Developing Default Logic as a
Theory of Reasons
John Horty
University of Maryland
www.umiacs.umd.edu/users/horty
Introduction
1. Tools for understanding deliberation/justification:
Standard deductive logic
Decision theory (plus probability, inductive logic)
2. But ordinarily, we seem to focus on reasons—in both deliberation and justification
3. Examples:
We should eat at Obelisk tonight
Racoons have been in the back yard again
4. This could be an allusion, an abbreviation, or a heuristic
But it could also be right . . .
. . . an idea common in epistemology, and especially in ethics
5. Common questions about reasons:
Relation between reasons and motivation?
Relation between reasons and desires?
Relation between reasons and values?
Objectivity of reason?
6. A different question:
How do reasons support actions or conclusions?
What is the mechanism of support?
7. One answer (numerical weighing):
. . . my way is to divide half a sheet of paper by a line into two columns; writing over the one Pro, and over the other Con
Then . . . I put down under the different heads short hints of the different motives, that at different times occur to me, for or against the measure.
When I have thus got them all together in one view, I endeavor to estimate their respective weights; and where I find two, one on each side, that seem equal, I strike them both out.
If I find a reason pro equal to some two reasons con, I strike out the three. If I judge some two reasons con, equal to three reasons pro, I strike out the five; and thus proceeding I find at length where the balance lies . . .
(Benjamin Franklin, letter to Joseph Priestley)
Objection: too simple, optimistic
8. Another answer (qualitative weighing):
Each reason is associated with a metaphorical weight. This weight need not be anything so precise as a number; it may be an entity of some vaguer sort.
The reasons for you to φ and those for you not to φ are aggregated or weighed together in some way. The aggregate is some function of the weights of the individual reasons.
The function may not be simply additive, as it is in the mechanical case. It may be a complicated function, and the specific nature of the reasons may influence it.
Finally, the aggregate comes out in favor of your φing, and that is why you ought to φ.
(John Broome, Reasons)
Objection: What are these vaguer entities? What is the aggregating function?
9. An analogy:
Compare
(1) How do reasons support conclusions?
to
(2) How do premises support conclusions?
Answers to (2) provided by Frege and Tarski
We want an answer to (1) at the same level of rigor, evaluated by the same standards
10. My answer:
Reasons are (provided by) defaults
The logic of defaults tells us how reasons support conclusions
11. Talk outline:
Prioritized default logic
Extensions, scenarios
Triggering, conflict, defeat
Binding defaults, proper scenarios
Elaborating the theory
Variable priorities
Undercutting (exclusionary) defeat
Exclusion and priorities
Applications and open questions
Moral particularism
Floating conclusions
Fixed priority default theories
1. Notation:
Propositions: A, B, C, . . . , > (the trivially true proposition)
Background language: ∧, ∨, ¬, ⇒
Consequence: `
Logical closure: Th(E) = {A : E ` A}
2. Example:
Tweety is a bird
Therefore, Tweety is able to fly
Why? There is a default that birds fly
Tweety is a bird
Tweety is a penguin
Therefore, Tweety is not able to fly
Because there is a (stronger) default that penguins don't fly
3. Default rules: X → Y
Example: B(t) → F(t)
Instance of: B(x) → F(x) (“Birds fly”)
4. Premise and conclusion:
If δ = X → Y , then
Prem(δ) = X
Conc(δ) = Y
If D set of defaults, then
Conc(D) = {Conc(δ) : δ ∈ D}
5. Priority ordering on defaults (strict, partial)
δ < δ′ means: δ′ stronger than δ
6. Priorities have different sources:
Specificity
Reliability
Authority
Our own reasoning
For now, take priorities as fixed, leading to . . .
[Figure: Tweety Triangle — B, F, P; δ1: B → F, δ2: P → ¬F; δ1 < δ2]
7. A fixed priority default theory is a tuple
〈W ,D, <〉
where W contains ordinary statements, D contains defaults, and < is an ordering
8. Example (Tweety Triangle):
W = {P, P ⇒ B}
D = {δ1, δ2}
δ1 = B → F
δ2 = P → ¬F
δ1 < δ2
(P = Penguin, B = Bird, F = Flies)
9. Main question: what can we conclude from sucha theory?
10. An extension E of 〈W ,D, <〉 is a belief set an ideal reasoner might settle on, based on this information
Usually defined directly, but we take a roundabout route . . .
11. A scenario based on 〈W ,D, <〉 is some subset S of the defaults D
12. A proper scenario is the “right” subset of defaults
13. An extension E based on 〈W ,D, <〉 is a set
E = Th(W ∪ Conc(S))
where S is a proper scenario
[Figure: Tweety Triangle — B, F, P; δ1: B → F, δ2: P → ¬F; δ1 < δ2]
14. Returning to example: 〈W ,D, <〉 where
W = {P, P ⇒ B}
D = {δ1, δ2}
δ1 = B → F
δ2 = P → ¬F
δ1 < δ2
Four possible scenarios:
S1 = ∅
S2 = {δ1}
S3 = {δ2}
S4 = {δ1, δ2}
But only S3 proper (“right”), so extension is
E3 = Th(W ∪ Conc(S3))
   = Th({P, P ⇒ B} ∪ {¬F})
   = Th({P, P ⇒ B, ¬F})
15. Immediate goal: specify proper scenarios
Binding defaults
1. If defaults provide reasons, binding defaults provide good reasons—forceful, or persuasive, in the context of a scenario
Defined through preliminary concepts:
Triggering
Conflict
Defeat
2. Triggered defaults:
TriggeredW ,D,<(S) = {δ ∈ D : W ∪ Conc(S) ` Prem(δ)}
3. Example: 〈W ,D, <〉 with
W = {B}
D = {δ1, δ2}
δ1 = B → F
δ2 = P → ¬F
δ1 < δ2
Then
TriggeredW ,D,<(∅) = {δ1}
4. Terminology question: What are reasons?
Answer: Reasons are antecedents of triggered defaults
[Figure: Nixon Diamond — Q, R, P; δ1, δ2]
5. Conflicted defaults:
ConflictedW ,D,<(S) = {δ ∈ D : W ∪ Conc(S) ` ¬Conc(δ)}
6. Example (Nixon Diamond):
Take 〈W ,D, <〉 with
W = {Q, R}
D = {δ1, δ2}
δ1 = Q → P
δ2 = R → ¬P
< = ∅
(Q = Quaker, R = Republican, P = Pacifist)
Then
TriggeredW ,D,<(∅) = {δ1, δ2}
ConflictedW ,D,<(∅) = ∅
But
ConflictedW ,D,<({δ1}) = {δ2}
ConflictedW ,D,<({δ2}) = {δ1}
[Figure: Tweety Triangle — B, F, P; δ1: B → F, δ2: P → ¬F; δ1 < δ2]
7. Basic idea: A default is defeated if there is a stronger reason supporting a contrary conclusion
DefeatedW ,D,<(S) = {δ ∈ D : ∃ δ′ ∈ TriggeredW ,D,<(S) such that
(1) δ < δ′
(2) Conc(δ′) ` ¬Conc(δ)}
8. Example of defeat (Tweety, again):
〈W ,D, <〉 where
W = {P, P ⇒ B}
D = {δ1, δ2}
δ1 = B → F
δ2 = P → ¬F
δ1 < δ2
Here, δ1 is defeated:
DefeatedW ,D,<(∅) = {δ1}
[Figure: Tweety Triangle — B, F, P; δ1: B → F, δ2: P → ¬F; δ1 < δ2]
9. Finally, binding defaults:
BindingW ,D,<(S) = {δ ∈ D : δ ∈ TriggeredW ,D,<(S),
δ ∉ ConflictedW ,D,<(S),
δ ∉ DefeatedW ,D,<(S)}
10. Stable scenarios: S is stable just in case
S = BindingW ,D,<(S)
11. Example (Tweety, yet again): four scenarios
S1 = ∅
S2 = {δ1}
S3 = {δ2}
S4 = {δ1, δ2}
Only S3 = {δ2} is stable, because
S3 = BindingW ,D,<(S3)
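These definitions are easy to prototype. Below is a minimal sketch (my own encoding, not Horty's formulation): formulas are strings (atoms) or nested tuples, entailment is brute-forced with truth tables, defaults are (name, premise, conclusion) triples, and the priority < is a set of (weaker, stronger) name pairs.

```python
from itertools import chain, combinations, product

# Formula encoding (illustrative): an atom is a string; compound formulas
# are tuples ('not', f), ('and', f, g), ('implies', f, g).

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*map(atoms, f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return (not ev(f[1], v)) or ev(f[2], v)          # 'implies'

def entails(premises, goal):
    # premises |- goal, checked by brute-force truth tables
    ats = sorted(set().union(atoms(goal), *map(atoms, premises)))
    return all(ev(goal, dict(zip(ats, row)))
               for row in product([True, False], repeat=len(ats))
               if all(ev(p, dict(zip(ats, row))) for p in premises))

def conc(S):                                         # Conc(S)
    return [c for (_, _, c) in S]

def triggered(W, D, lt, S):
    return {d for d in D if entails(W + conc(S), d[1])}

def conflicted(W, D, lt, S):
    return {d for d in D if entails(W + conc(S), ('not', d[2]))}

def defeated(W, D, lt, S):
    trig = triggered(W, D, lt, S)
    return {d for d in D
            if any((d[0], e[0]) in lt and entails([e[2]], ('not', d[2]))
                   for e in trig)}

def binding(W, D, lt, S):
    return (triggered(W, D, lt, S) - conflicted(W, D, lt, S)
            - defeated(W, D, lt, S))

def stable(W, D, lt):
    # enumerate every scenario S subset of D; keep those with S = Binding(S)
    subsets = chain.from_iterable(combinations(D, r) for r in range(len(D) + 1))
    return [set(S) for S in subsets if set(S) == binding(W, D, lt, set(S))]

# Tweety Triangle: W = {P, P => B}, d1: B -> F, d2: P -> not-F, d1 < d2
W = ['P', ('implies', 'P', 'B')]
d1 = ('d1', 'B', 'F')
d2 = ('d2', 'P', ('not', 'F'))
print(stable(W, {d1, d2}, {('d1', 'd2')}))   # only {d2} is stable
```

Enumerating all 2^|D| scenarios and truth-table entailment are both exponential, but that is fine for these toy theories.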
Three complications
1. Complication #1: Can we just identify the proper scenarios with the stable scenarios?
Almost . . . but not quite
2. Problem is “groundedness”
Take 〈W ,D, <〉 with
W = ∅
D = {δ1}
δ1 = A → A
< = ∅
Then S1 = {δ1} is a stable scenario, but shouldn't be proper
The belief set generated by S1 is
Th(W ∪ Conc(S1)) = Th({A})
but that’s not right!
3. Solution:
Let
ThS(W) = the formulas provable from W when ordinary inference rules are supplemented with the defaults from S
Then given theory 〈W ,D, <〉, define scenario S asgrounded in W iff
Th(W ∪ Conc(S)) = ThS(W)
Finally, given 〈W ,D, <〉, define S as a proper scenario based on this theory iff
S is (i) stable and (ii) grounded in W
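The groundedness check can be sketched in the same style (again my own encoding: defaults as (name, premise, conclusion) triples, truth-table entailment): S is grounded in W just in case forward chaining from W, with the defaults in S as extra inference rules, already yields every conclusion in Conc(S).

```python
from itertools import product

# Formula encoding as before: atoms are strings; compound formulas are
# tuples ('not', f) or ('implies', f, g).

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*map(atoms, f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    return (not ev(f[1], v)) or ev(f[2], v)          # 'implies'

def entails(premises, goal):
    ats = sorted(set().union(atoms(goal), *map(atoms, premises)))
    return all(ev(goal, dict(zip(ats, row)))
               for row in product([True, False], repeat=len(ats))
               if all(ev(p, dict(zip(ats, row))) for p in premises))

def grounded(W, S):
    """Th(W ∪ Conc(S)) = ThS(W) iff every conclusion of S is reached by
    forward chaining from W using the defaults in S as inference rules."""
    derived, changed = list(W), True
    while changed:
        changed = False
        for (_, prem, concl) in S:
            if concl not in derived and entails(derived, prem):
                derived.append(concl)
                changed = True
    return all(entails(derived, concl) for (_, _, concl) in S)

# The self-supporting default d1 = A -> A: {d1} is stable but not grounded,
d1 = ('d1', 'A', 'A')
print(grounded([], {d1}))                                  # False
# while the Tweety scenario S3 = {d2} is grounded in W.
d2 = ('d2', 'P', ('not', 'F'))
print(grounded(['P', ('implies', 'P', 'B')], {d2}))        # True
```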
[Figure: A; δ1, δ2 with δ1 < δ2]
4. Complication #2: Some theories have no properscenarios, and so no extensions
Example: 〈W ,D, <〉 with
W = ∅
D = {δ1, δ2}
δ1 = > → A
δ2 = A → ¬A
δ1 < δ2
5. Options:
Syntactic restrictions to rule out "vicious cycles"
Generalize definition of proper scenario, using tools from truth theory
Live with it (benign choice if we like "skeptical" theory)
[Figure: Nixon Diamond — Q, R, P; δ1, δ2]
6. Complication #3: Some theories have multiple proper scenarios, and so multiple extensions
Example: Nixon Diamond, again
Take 〈W ,D, <〉 with
W = {Q, R}
D = {δ1, δ2}
δ1 = Q → P
δ2 = R → ¬P
< = ∅
Then two proper scenarios
S1 = {δ1}
S2 = {δ2}
and so two extensions:
E1 = Th({Q, R, P})
E2 = Th({Q, R, ¬P})
So . . . what should we conclude?
7. Consider three options:
#1. Choice: pick an arbitrary proper scenario
Sensible, actually
But hard to codify as a consequence relation
#2. Brave/credulous: give some weight to any conclusion A contained in some extension
• Epistemic version (crazy): Endorse A whenever A is contained in some extension
Example: P and ¬P in Nixon case
• Epistemic version (not crazy): Endorse B(A)—A is "believable"—whenever A is contained in some extension
Example: B(P) and B(¬P) in Nixon case
• Practical version: Endorse ©(A)—A is an "ought"—whenever A is contained in some extension
Example: ©(P) and ©(¬P) in Nixon case
#3. Cautious/"Skeptical": endorse A as a conclusion whenever A is contained in every extension
Defines a consequence relation, and not weird: supports neither P nor ¬P in the Nixon case
Note: most popular option, but some problems . . .
Elaborating default logic
1. Discuss here only two things:
Ability to reason about priorities
Treatment of "undercutting" or "exclusionary" defeat
2. Begin with first problem
So far, fixed priorities on default rules
But we can reason about default priorities . . . and then use the priorities we arrive at to control our reasoning
3. Five steps:
#1. Add priority statements (δ7 < δ9) to the object language
#2. Introduce new variable priority default theories
〈W ,D〉
with priority statements now belonging to W and D
#3. Add strict priority axioms to W:
δ < δ′ ⇒ ¬(δ′ < δ)
(δ < δ′ ∧ δ′ < δ′′) ⇒ δ < δ′′
#4. Lift priorities from object to meta language
δ <S δ′ iff W ∪ Conc(S) ` δ < δ′.
#5. Proper scenarios for new default theories:
S is a proper scenario based on 〈W ,D〉
iff
S is a proper scenario based on 〈W ,D, <S〉
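Step #4 can be sketched under a strong simplification (mine): every statement in play, priority statements included, is treated as an opaque atom like "d2<d1", so entailment from W ∪ Conc(S) collapses to membership. Step #3's transitivity axiom is applied as a closure; asymmetry would still need to be checked separately.

```python
# Defaults are (name, premise, conclusion) triples; priority statements are
# opaque atoms "a<b". Helper names are mine, not Horty's notation.

def lifted_priority(W, S, names):
    base = set(W) | {concl for (_, _, concl) in S}
    lt = {(a, b) for a in names for b in names
          if a != b and f"{a}<{b}" in base}
    # close under the transitivity axiom of step #3
    changed = True
    while changed:
        changed = False
        for (a, b) in list(lt):
            for (c, d) in list(lt):
                if b == c and (a, d) not in lt:
                    lt.add((a, d))
                    changed = True
    return lt

# Extended Nixon Diamond, with S = {d1, d3, d5} ('T' stands in for >):
W = {'Q', 'R'}
S = {('d1', 'Q', 'P'),
     ('d3', 'T', 'd2<d1'),
     ('d5', 'T', 'd4<d3')}
print(sorted(lifted_priority(W, S, ['d1', 'd2', 'd3', 'd4', 'd5'])))
# [('d2', 'd1'), ('d4', 'd3')]: d1 now beats d2, and d3 beats d4
```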
[Figure: Extended Nixon Diamond — Q, R, P; δ1, δ2; priority defaults δ3: δ2 < δ1 and δ4: δ1 < δ2]
4. Example (Extended Nixon Diamond):
Consider 〈W ,D〉 where
W contains Q, R
D contains
δ1 = Q → P
δ2 = R → ¬P
δ3 = > → δ2 < δ1
δ4 = > → δ1 < δ2
δ5 = > → δ4 < δ3
Then unique proper scenario is
S = {δ1, δ3, δ5}
5. Example (Perfected Security Interest):
Consider 〈W ,D〉 where
W contains
Possession
¬Documents
Later(δSMA, δUCC )
Federal(δSMA)
State(δUCC )
D contains
δUCC = Possession → Perfected
δSMA = ¬Documents → ¬Perfected
δLP = Later(δ, δ′) → δ < δ′
δLS = Federal(δ) ∧ State(δ′) → δ′ < δ
δLSLP = > → δLS < δLP
Unique proper scenario is
S = {δLSLP, δLP , δUCC}
6. Undercutting defeat (epistemology), compared to rebutting defeat
Example:
The object looks red
My reliable friend says it is not red
Drug 1 makes everything look red
7. Exclusionary reasons (practical reasoning)
Example (Colin’s dilemma, from Raz):
Should son go to private school?
The school provides good education
He’ll meet fancy friends
The school is expensive
Decision would undermine public education
Promise: only consider son’s interests . . .
8. How can this be represented?
One view (Pollock): undercutting is a separate form of defeat
My suggestion:
Only ordinary (rebutting) defeat
Enhance the language slightly
Tweak the notion of triggering
9. Four steps:
#1. New predicate Out, so that Out(δ) means that δ is undercut, or excluded
#2. Introduce new exclusionary default theories astheories in a language containing Out.
#3. Lift notion of exclusion from object to meta language: where S is a scenario based on a theory with W as hard information,
δ ∈ ExcludedS iff W ∪ Conc(S) ` Out(δ).
#4. Only defaults that are not excluded can be triggered:
TriggeredW ,D,<(S) = {δ ∈ D : δ ∉ ExcludedS and
W ∪ Conc(S) ` Prem(δ)}
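The tweaked triggering clause can be sketched with an atomic simplification (mine): Out(δ1) is written as the atom "Out(d1)", so derivability from W ∪ Conc(S) again collapses to membership, and defaults are (name, premise, conclusion) triples.

```python
# A minimal sketch of triggering with exclusion, assuming every statement
# in play is an opaque atom; the encoding and helper names are mine.

def triggered(W, D, S):
    base = set(W) | {concl for (_, _, concl) in S}
    out = {name for (name, _, _) in D if f"Out({name})" in base}
    return {d for d in D if d[0] not in out and d[1] in base}

# Drug example: d1 = L -> R, d3 = D1 -> Out(d1), with W = {L, D1}
d1 = ('d1', 'L', 'R')
d3 = ('d3', 'D1', 'Out(d1)')
W = {'L', 'D1'}
print(sorted(d[0] for d in triggered(W, {d1, d3}, set())))  # ['d1', 'd3']
# once d3 is in the scenario, Out(d1) is derivable, so d1 is no longer triggered
print(sorted(d[0] for d in triggered(W, {d1, d3}, {d3})))   # ['d3']
```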
[Figure: L, S, R, D1; δ1: L → R, δ2: S → ¬R, δ3: D1 → Out(δ1); δ1 < δ2, δ1 < δ3]
10. Example: For ordinary rebutting defeat, take 〈W ,D〉 where
W contains L, S, and δ1 < δ2, δ1 < δ3
D contains
δ1 = L → R
δ2 = S → ¬R
δ3 = D1 → Out(δ1)
(L = Looks red, R = Red, S = Statement by friend, D1 = Drug 1)
So proper scenario is
S = {δ2}
generating the extension
E = Th(W ∪ {¬R})
[Figure: L, S, R, D1; δ1: L → R, δ2: S → ¬R, δ3: D1 → Out(δ1); δ1 < δ2, δ1 < δ3]
11. Example: For undercutting, or exclusionary, defeat, take 〈W ,D〉 where
D contains
δ1 = L → R
δ2 = S → ¬R
δ3 = D1 → Out(δ1)
W contains L, D1, and δ1 < δ2, δ1 < δ3
So proper scenario is
S = {δ3}
generating the extension
E = Th(W ∪ {Out(δ1)})
[Figure: L, S, R, D1, D2; δ1, δ2, δ3, δ4; δ1 < δ2, δ1 < δ3 < δ4]
12. Example: Drug 2 is an antidote to Drug 1, so for an excluder excluder, take 〈W ,D〉 where
W contains L, D1, D2, δ1 < δ2, and δ1 < δ3 < δ4
D contains
δ1 = L → R
δ2 = S → ¬R
δ3 = D1 → Out(δ1)
δ4 = D2 → Out(δ3)
So proper scenario is
S = {δ1, δ4}
generating the extension
E = Th(W ∪ {R, Out(δ3)})
13. Example (Colin's dilemma, simplified):
Let D contain
δ1 = E → S
δ2 = U → ¬S
δ3 = ¬Welfare(δ2) → Out(δ2)
(E = Provides good education, S = Send son to private school, U = Undermine support for public education)
The default δ3 is itself an instance of
¬Welfare(δ) → Out(δ)
Let W contain E, U , and ¬Welfare(δ2)
Then proper scenario is
S = {δ1, δ3}
generating the extension
E = Th(W ∪ {S,Out(δ2)})
Exclusion and priorities
1. Can exclusion be defined in terms of priority adjustment?
Many people think so...
Perry: "an exclusionary reason is simply the special case where one or more first-order reasons are treated as having zero weight"
Dancy: "If we are happy with the idea that a reason can be attenuated . . . , why should we fight shy of supposing that it can be reduced to nothing"
Schroeder: "undercutting" is best analyzed as an extreme case of attenuation in the strength of reasons; he refers to this thesis as the "undercutting hypothesis"
Horty: developed a formal theory of exclusion as the assignment to a default of a priority that falls below some particular threshold
2. But this idea entails
Downward closure for exclusion: if δ is excluded and δ′ < δ, then δ′ is excluded.
[Figure: A, B, C, P; δ1: A → P, δ2: B → ¬P, δ3: C → Out(δ2); δ1 < δ2 < δ3]
3. Example:
Take 〈W ,D〉 with
W = {A, B, C, δ1 < δ2, δ2 < δ3}
D = {δ1, δ2, δ3}
δ1 = A → P
δ2 = B → ¬P
δ3 = C → Out(δ2)
Then the proper scenario is
S = {δ1, δ3}
generating the extension
E = Th(W ∪ {P,Out(δ2)})
So downward closure fails, but is that right?
[Figure: A, B, C, P; δ1: A → P, δ2: B → ¬P, δ3: C → Out(δ2); δ1 < δ2 < δ3]
4. First interpretation (mathematicians):
Priority ordering represents reliability of the mathematicians
P = Some conjecture
A = First mathematician's assertion that he has proved P
B = Second mathematician's assertion that she has proved ¬P
C = Third mathematician's assertion that the second mathematician is too unreliable to be trusted
Here downward closure seems to hold
[Figure: A, B, C, P; δ1: A → P, δ2: B → ¬P, δ3: C → Out(δ2); δ1 < δ2 < δ3]
5. Second interpretation (officers):
Captain < Major < Colonel
P = Some action to perform (or not)
A = Captain’s command to perform P
B = Major’s command not to perform P
C = Colonel's command to ignore Major's command
Here downward closure seems to fail
[Figure: A, B, C, P; δ1: A → P, δ2: B → ¬P, δ3: C → Out(δ2); δ1 < δ2 < δ3]
6. So if downward closure fails, what do we do when we want downward closure?
Answer: Supplement hard information with
(Out(δ) ∧ δ′ < δ) ⊃ Out(δ′)
This gives us the proper scenario
S = {δ3}
generating the extension
E = Th(W ∪ {Out(δ2)})
[Figure: A, B, P; δ1, δ2; δ1 < δ2]
7. Next question: a default cannot be defeated by a weaker default, but can it be excluded by a weaker default?
Yes, on current account.
Take 〈W ,D〉 with
W = {A, B, δ1 < δ2}
D = {δ1, δ2}
δ1 = A → Out(δ2)
δ2 = B → P
Then the proper scenario is
S = {δ1}
generating the extension
E = Th(W ∪ {Out(δ2)})
[Figure: A, B, P; δ1, δ2; δ1 < δ2]
8. Pollock’s answer: No
It seems apparent that any adequate account of justification must have the consequence that if a belief is unjustified relative to a particular degree of justification, then it is unjustified relative to any higher degree of justification. (Cognitive Carpentry, p. 104)
9. I disagree: different standards of legal evidence, jailhouse snitch
10. But what do we do in cases where we do want this constraint? (Military officer interpretation)
[Figure: A, B, P; δ1, δ2; δ1 < δ2]
11. My (tentative) suggestion: suppose defaults are protected from exclusion
Begin with 〈W ,D〉, where
W = {A, B, δ1 < δ2}
D = {δ1, δ2}
δ1 = A → Out(δ2)
δ2 = B → P ∧ ¬Out(δ2)
Then the proper scenario is
S = {δ2}
generating the extension
E = Th(W ∪ {P ∧ ¬Out(δ2)})
An application: moral particularism
1. Dancy’s argument for moral particularism
Four steps:
1. Principles state the force of reasons
2. Reasons vary in force (valence, polarity) from context to context
3. There are no reasons with constant force
4. Moral principles must be wrong, and so cannot guide our action
2. OK, why believe Step 2?
Examples, such as . . .
Context A: Things are normal
Object looks red
This is a reason for taking it to be red
Context B: I have taken Drug #3, which flips red and blue
Object looks red
This is a reason for taking it to be blue
3. Go back to theory of reasons:
• Reasons are antecedents of triggered defaults
4. First representation: 〈W ,D〉 where
D contains
δ1 = L → R
δ2 = L ∧ D3 → B
W contains L, D3, ¬(R ∧ B), and δ1 < δ2
Proper scenario is
S1 = {δ2}
generating the extension
E = Th(W ∪ {B})
So on this representation
L is, in fact, a reason for R
It is just defeated by L ∧ D3
But, Dancy:
It is not as if it is some reason for me to believe that there is something red before me, but that as such . . . it is overwhelmed by contrary reasons. It is no longer any reason at all to believe that there is something red before me . . .
5. Second representation: 〈W ,D〉 where
D contains
δ1 = L → R
δ2 = L ∧ D3 → B
δ3 = D3 → Out(δ1)
W contains L, D3, ¬(B ∧ R), and δ1 < δ2
Proper scenario is
S2 = {δ2, δ3}
generating the extension
E = Th(W ∪ {B,Out(δ1)})
So on this representation
L is not a reason for R
9. Upshot: reason holism does not imply absence of a principled framework.
6. Dancy’s practical example
Context A: Things are normal
I borrowed a book from you
This is a reason for returning it to you
Context B: Book is stolen from library
I borrowed a book from you
This is not a reason for returning it to you
Dancy, again:
It is not that I have some reason to return it to you and more reason to put it back in the library. I have no reason at all to return it to you.
7. My intuitions are different: five possibilities
8. But all viewpoints can be represented, and what is a reason and what isn't in each case can be decided in a systematic way, by appeal to principles
An open question: floating conclusions
1. Getting from scenario S to extension E
Direct route:
E = Th(W ∪ Conc(S))
Indirect route:
Defaults are rules of inference
Construct arguments to support conclusions
Examples:
> ⇒ A → B ⇒ ¬C
> ⇒ Q → P
> ⇒ R → ¬P
support conclusions ¬C, P, ¬P, respectively.
So, first form argument extension Φ
Then take conclusions supported by Φ
2. Function ∗ maps arguments into conclusions:
∗α = conclusion supported by α
∗Φ = {∗α : α ∈ Φ}
[Figure: argument graph for the Nixon Diamond — >, Q, R, P]
3. Consider multiple argument extensions
Φ1 = {> ⇒ Q,> ⇒ R,> ⇒ Q → P}
Φ2 = {> ⇒ Q,> ⇒ R,> ⇒ R → ¬P}
4. Skeptical—or "intersect extensions"—option now bifurcates
Alternative #1:
∗(⋂{Φ : Φ is an argument extension of Γ})
Alternative #2:
⋂{∗Φ : Φ is an argument extension of Γ}
5. In this case, same result:
{Q,R}
But not always, due to the phenomenon of floating conclusions
[Figure: argument graph — >, Q, R, D, H, E]
Argument Extensions:
Φ1 = { > ⇒ Q, > ⇒ R,
       > ⇒ Q → D,
       > ⇒ Q → D ⇏ H,
       > ⇒ Q → D ⇒ E }
Φ2 = { > ⇒ Q, > ⇒ R,
       > ⇒ R → H,
       > ⇒ R → H ⇏ D,
       > ⇒ R → H ⇒ E }
Alternative #1 yields:
{Q, R}
Alternative #2 yields:
{Q, R, E}
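The bifurcation can be made concrete with a toy encoding (mine, not the paper's formalism): an argument is a tuple of the statements along its path, ∗ reads off the last one, and Φ1, Φ2 transcribe the extensions above ('T' for >, 'notH'/'notD' for the blocked conclusions).

```python
# ∗ maps an argument (a path) to its conclusion, and an extension (a set
# of paths) to its conclusion set.

def star(ext):
    return {path[-1] for path in ext}

phi1 = {('T', 'Q'), ('T', 'R'),
        ('T', 'Q', 'D'), ('T', 'Q', 'D', 'notH'), ('T', 'Q', 'D', 'E')}
phi2 = {('T', 'Q'), ('T', 'R'),
        ('T', 'R', 'H'), ('T', 'R', 'H', 'notD'), ('T', 'R', 'H', 'E')}

# Alternative #1: conclusions of the intersected extensions
print(sorted(star(phi1 & phi2)))          # ['Q', 'R']
# Alternative #2: intersection of the conclusion sets
print(sorted(star(phi1) & star(phi2)))    # ['E', 'Q', 'R']
```

Alternative #1 intersects the extensions first and loses E, since E is supported by a different path in each extension; Alternative #2 keeps the floating conclusion E.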
6. Conventional view is that floating conclusions should be accepted (so Alternative #2 is correct).
Ginsberg:
Given that both hawks and doves are politically [extreme], Nixon certainly should be as well. (Essentials of Artificial Intelligence, 1993)
Makinson and Schlechta:
It is an oversimplification to take a proposition A as acceptable . . . iff it is supported by some [argument] path α in the intersection of all extensions. Instead A must be taken as acceptable iff it is in the intersection of all outputs of extensions, where the output of an extension is the set of all propositions supported by some path within it. (Artificial Intelligence, 1991)
Stein:
The difficulty lies in the fact that some conclusions may be true in every credulous extension, but supported by different [argument] paths in each. Any path-based theory must either accept one of these paths, and be unsound, or reject all such paths, and with them the ideally skeptical conclusion. (Resolving Ambiguity . . . , 1991)
Pollock:
(Defeasible reasoning, unpublished) makes it clear that the desire for floating conclusions motivated the 1995 semantics
7. Yacht example:
• Both (elderly) parents have $500K
• I want a yacht; it requires a large deposit, balance due later—otherwise, lose the deposit
• Utilities determine conditional preferences:
– If I will inherit at least half a million dollars, I should place a deposit on the yacht
– Otherwise, I should not place a deposit
So decision hinges on truth of
F ∨ M
• Brother says: “Father will leave his money to me, but Mother is leaving her money to you”
BA(¬F ∧ M )
• Sister says: “Mother will leave her money to me, but Father is leaving his money to you”
SA(F ∧ ¬M )
• Both brother and sister are reliable, so we have the defaults:
BA(¬F ∧ M ) → (¬F ∧ M )
SA(F ∧ ¬M ) → (F ∧ ¬M )
[Figure: argument graph for the yacht example — >, BA(¬F ∧ M), SA(F ∧ ¬M), ¬F ∧ M, F ∧ ¬M, F ∨ M]
Argument Extensions:
Φ1 = { > ⇒ BA(¬F ∧ M), > ⇒ SA(F ∧ ¬M),
       > ⇒ BA(¬F ∧ M) → ¬F ∧ M,
       > ⇒ BA(¬F ∧ M) → ¬F ∧ M ⇏ F ∧ ¬M,
       > ⇒ BA(¬F ∧ M) → ¬F ∧ M ⇒ F ∨ M }
Φ2 = { > ⇒ BA(¬F ∧ M), > ⇒ SA(F ∧ ¬M),
       > ⇒ SA(F ∧ ¬M) → F ∧ ¬M,
       > ⇒ SA(F ∧ ¬M) → F ∧ ¬M ⇏ ¬F ∧ M,
       > ⇒ SA(F ∧ ¬M) → F ∧ ¬M ⇒ F ∨ M }
Alternative #1 yields:
{BA(¬F ∧ M), SA(F ∧ ¬M)}
Alternative #2 yields:
{BA(¬F ∧ M), SA(F ∧ ¬M), F ∨ M}
8. Other examples:
• Military example:
– You want to press ahead if enemy has retreated from defensive position
– Spy 1 says: enemy retreating over mountains, diversionary force feigns retreat along river
– Spy 2 says: enemy retreating along river, diversionary force feigns retreat over mountains
• Economics example:
– Economic health, low inflation, strong growth
– Prediction 1: strong growth will trigger high inflation, leading to recession
– Prediction 2: inflation will continue to decline, resulting in deflation and so recession
• Ginsberg’s original example:
– Why not suppose that the extreme tendencies serve to moderate each other?
9. Why accept floating conclusions?
Maybe an analogy between
B ⊃ A
C ⊃ A
B ∨ C
A
and (supposing two extensions, E1 and E2)
A ∈ E1 (so : E1 ⊃ A)
A ∈ E2 (so : E2 ⊃ A)
E1 ∨ E2 ??????
A
First argument relies on the premise that B ∨ C; must a skeptical reasoner suppose "E1 ∨ E2"?
Sometimes appropriate to think
• One or another extension must be (entirely) right—we just don't know which
But other times
• Real possibility that they might all be wrong
[Figure: argument graph for Prakken's example — >, Born Holland, Norwegian name, Dutch, Norwegian, Ice skating]
10. Prakken’s example:
Brygt Rykkje was born in Holland
Brygt Rykkje has a Norwegian name
Brygt Rykkje is Dutch
Brygt Rykkje is Norwegian
Here we do like the floating conclusion
11. Open question: how do we distinguish cases in which we do like floating conclusions from cases in which we don't?
Conclusions
1. What’s good:
It is an actual theory of reasons and their interaction
Plausible results in many cases
Plausible resolution to some existing problems
Helps to frame some new problems
2. What’s bad, or still open:
The real definition of defeat is ugly
Lumps together practical and epistemic reasons—an insight or a mistake?
Other linguistic issues, such as generics
. . .