
TECHNICAL REPORTS IN COMPUTER SCIENCE

Technische Universität Dortmund

Proceedings of the

16th International Workshop on Non-Monotonic Reasoning

(NMR 2016), April 22 – 24, 2016

Cape Town, South Africa

Editors: Gabriele Kern-Isberner, Lehrstuhl Informatik 1, Technische Universität Dortmund

Renata Wassermann, Computer Science Department, University of São Paulo

Number: 852

Technische Universität Dortmund — Fakultät für Informatik, Otto-Hahn-Str. 14, 44227 Dortmund

Gabriele Kern-Isberner, Renata Wassermann (Editors), Proceedings of the 16th International Workshop on Non-Monotonic Reasoning (NMR 2016), Cape Town, South Africa; April 22 – 24, 2016. ©2016.

Gabriele Kern-Isberner, Renata Wassermann: PREFACE

Nonmonotonicity is crucial for any formal approach to modelling human reasoning – most of the conclusions we draw in our everyday lives and on which we base (sometimes important) decisions are defeasible, prone to be given up if further information arrives. This is in clear contrast to classical – e.g., propositional or first-order – logics, which are monotonic, i.e., their deductive conclusions are preserved for eternity. It is due to monotonicity that all (correct) proofs in mathematics are still valid, regardless of whatever new theories are being developed, but also that robots based on classical logics fail in uncertain, incompletely specified environments.

Therefore, nonmonotonic reasoning (NMR) deals with important issues in Artificial Intelligence, and has strong connections to other areas of knowledge representation, in particular to belief revision, action logics, argumentation, logic programming, preference handling, and uncertain reasoning. The NMR workshops are the premier forum for presenting results in this broad subfield of knowledge representation and reasoning (KR). Their aim is to bring together active researchers, and to foster discussions and collaborations on theoretical foundations, applications, and system development.

NMR has a long history – it started in 1984, and is held every two years. Recent previous NMR workshops took place in Vienna (2014), Rome (2012), Toronto (2010), and Sydney (2008). Following established and fruitful traditions, NMR 2016 was co-located with the 15th International Conference on Principles of Knowledge Representation and Reasoning (KR 2016) and the 29th International Workshop on Description Logics (DL 2016). In particular, NMR 2016 shared a joint session with DL 2016. We were happy to welcome Laura Giordano (Università del Piemonte Orientale) and Leon van der Torre (University of Luxembourg) as invited speakers; Laura's talk was also an invited talk of DL 2016.

This volume contains most of the accepted papers of NMR 2016. Some papers had already been published, or are meant to be published elsewhere, so we could only provide URLs in those cases. This collection of NMR papers illustrates impressively both the depth and the breadth of NMR, dealing with theoretical issues as well as connecting different subfields of knowledge representation and reasoning.

In Studies on Brutal Contraction and Severe Withdrawal: Preliminary Report, Marco Garapa, Eduardo Fermé, and Maurício Reis study different classes of contraction operators and provide axiomatic characterizations for them. Zhiqiang Zhuang, James Delgrande, Abhaya Nayak, and Abdul Sattar deal with interleaving two kinds of nonmonotonicity: in their paper A New Approach for Revising Logic Programs, they also allow the logic underlying belief revision operations to be nonmonotonic. Aaron Hunter's paper Ordinal Conditional Functions for Nearly Counterfactual Revision also considers an interesting and often neglected issue in belief revision: how can revision by (nearly) counterfactual conditionals be carried out?

Two papers study belief revision in a probabilistic environment: Gavin Rens proposes a unified model of quantitative belief change in his paper On Stochastic Belief Revision and Update and their Combination, and together with Thomas Meyer and Giovanni Casini, he also contributes the paper Revising Incompletely Specified Convex Probabilistic Belief Bases to this volume. Moreover, another two papers deal with connections between nonmonotonic reasoning and belief revision, on the one hand, and ontological reasoning, on the other hand: Özgür Özçep considers Iterated Ontology Revision by Reinterpretation, and Valentina Gliozzi focusses on typicality operators in her paper A strengthening of rational closure in DLs: reasoning about multiple aspects.

One of the most basic ideas of nonmonotonic reasoning is to order models or states by preference relations. In Preferential Modalities Revisited, Katarina Britz and Ivan Varzinczak apply such semantic preference relations to modal accessibility relations, while Kristijonas Cyras and Francesca Toni consider preferences in assumption-based argumentation (ABA) frameworks in their paper Properties of ABA+ for Non-Monotonic Reasoning. Jesse Heyninck and Christian Straßer also emphasize the strong Relations between assumption-based approaches in nonmonotonic logic and formal argumentation. Ringo Baumann, Thomas Linsbichler, and Stefan Woltran deal with Verifiability of Argumentation Semantics by elaborating on which specific information some well-known semantics of abstract argumentation systems can be based. Jean-Guy Mailly's paper Using Enthymemes to Fill the Gap between Logical Argumentation and Revision of Abstract Argumentation Frameworks is a first approach to tackling the problem that agents cannot decode arguments correctly due to missing or different (background) knowledge. Thomas Linsbichler, Jörg Pührer, and Hannes Strass aim at Characterizing Realizability in Abstract Argumentation by presenting algorithmic approaches that are apt to build up knowledge bases of arguments from a given set of interpretations. A somewhat dual problem is considered in the context of belief merging in the paper Distributing Knowledge into Simple Bases by Adrian Haret, Jean-Guy Mailly, and Stefan Woltran: how can a knowledge base arise by merging simpler knowledge bases in a given fragment of classical logic?

Zeynep Saribatur and Thomas Eiter present a high-level representation formalism that can be applied to model Reactive Policies with Planning for Action Languages. In his extended abstract, Zoltán Ésik studies Equational properties of stratified least fixed points associated with logic programs. Adrian Paschke and Tara Athan show in their paper Law Test Suites for Semantically-Safe Rule Interchange how basic principles of nonmonotonic reasoning can be adapted to verify a particular semantics. In Static and Dynamic Views on the Algebra of Modular Systems, Eugenia Ternovska elaborates on properties of a knowledge representation framework called the Algebra of Modular Systems; in particular, she uses the algebra for a high-level encoding of problem solving on graphs.


ACKNOWLEDGMENTS

First of all, we would like to thank the members of our Program Committee, who provided us with excellent reviews on time. We are also grateful to the South African Department of Science and Technology (DST), the Council for Scientific and Industrial Research (CSIR) in South Africa, the South African Centre for Artificial Intelligence Research (CAIR), Principles of Knowledge Representation and Reasoning, Incorporated (KR, Inc.), and the European Association for Artificial Intelligence (EurAI, formerly ECCAI) for supporting our workshop by donations. The team of local organizers – Tommie Meyer in the first place – did a perfect job in helping us make NMR 2016 a great event; thanks to all of them.

Last, but not least, we wish to thank Christian Eichhorn, who took care of the NMR 2016 website and set up these proceedings, and the Technische Universität Dortmund, where the proceedings were published as a technical report of the Faculty of Computer Science.

PROGRAM COMMITTEE OF NMR 2016

Gabriele Kern-Isberner, Technische Universität Dortmund (Co-Chair)
Renata Wassermann, University of São Paulo (Co-Chair)
Christoph Beierle, University of Hagen
Alexander Bochman, Holon Institute of Technology
Gerhard Brewka, Leipzig University
Jan Broersen, Utrecht University
Marina De Vos, University of Bath
Marc Denecker, K.U. Leuven
Juergen Dix, Clausthal University of Technology
Paul Dunne, University of Liverpool
Wolfgang Faber, University of Huddersfield
Eduardo Fermé, Universidade da Madeira
Martin Gebser, University of Potsdam
Michael Gelfond, Texas Tech University
Valentina Gliozzi, Università di Torino
Lluis Godo, Artificial Intelligence Research Institute, Barcelona
Sven Ove Hansson, Royal Institute of Technology, Stockholm
Andreas Herzig, Université Paul Sabatier, Toulouse
Anthony Hunter, University College London
Katsumi Inoue, National Institute of Informatics, Japan
Tomi Janhunen, Aalto University
Sébastien Konieczny, Université d'Artois
Gerhard Lakemeyer, Aachen University of Technology
Thomas Lukasiewicz, University of Oxford
Maria Vanina Martinez, Universidad Nacional del Sur (Bahia Blanca)
Thomas Meyer, University of Cape Town


Nir Oren, University of Aberdeen
Maurice Pagnucco, The University of New South Wales
Odile Papini, Aix-Marseille Université
Pavlos Peppas, University of Technology, Sydney
Laurent Perrussel, Université de Toulouse
Ramon Pino Perez, Universidad de Los Andes
Henri Prade, Université Paul Sabatier, Toulouse
Ken Satoh, National Institute of Informatics, Japan
Luigi Sauro, University of Naples "Federico II"
Steven Schockaert, Cardiff University
Gerardo Simari, Universidad Nacional del Sur (Bahia Blanca)
Guillermo Simari, Universidad Nacional del Sur (Bahia Blanca)
Hannes Strass, Leipzig University
Heiner Stuckenschmidt, University of Mannheim
Evgenia Ternovska, Simon Fraser University
Matthias Thimm, Universität Koblenz-Landau
Mirek Truszczynski, University of Kentucky
Ivan Varzinczak, Universidade Federal do Rio de Janeiro
Joost Vennekens, K.U. Leuven
Serena Villata, INRIA Sophia Antipolis
Kewen Wang, Griffith University
Emil Weydert, University of Luxembourg
Stefan Woltran, TU Wien


CONTENTS

Invited Talks

Laura Giordano: Reasoning about typicality in preferential description logics . . . 1

Leon van der Torre: Arguing about obligations and permissions . . . . . . 3

Regular Papers

Ringo Baumann, Thomas Linsbichler and Stefan Woltran: Verifiability of Argumentation Semantics . . . 5

Katarina Britz, Ivan Varzinczak: Preferential Modalities Revisited . . . . 15

Kristijonas Cyras, Francesca Toni: Properties of ABA+ for Non-Monotonic Reasoning . . . 25

Zoltán Ésik: Equational properties of stratified least fixed points . . . 35

Marco Garapa, Eduardo Fermé, Maurício D. L. Reis: Studies on Brutal Contraction and Severe Withdrawal: Preliminary Report . . . 37

Valentina Gliozzi: A strengthening of rational closure in DLs: reasoning about multiple aspects . . . 47

Adrian Haret, Jean-Guy Mailly and Stefan Woltran: Distributing Knowledge into Simple Bases . . . 55

Jesse Heyninck, Christian Straßer: Relations between assumption-based approaches in nonmonotonic logic and formal argumentation . . . 65

Aaron Hunter: Ordinal Conditional Functions for Nearly Counterfactual Revision . . . 77

Thomas Linsbichler, Jörg Pührer, Hannes Strass: Characterizing Realizability in Abstract Argumentation . . . 85

Jean-Guy Mailly: Using Enthymemes to Fill the Gap between Logical Argumentation and Revision of Abstract Argumentation Frameworks . . . 95

Özgür L. Özçep: Iterated Ontology Revision by Reinterpretation . . . 105



Adrian Paschke and Tara Athan: Law Test Suites for Semantically-Safe Rule Interchange . . . 115

Gavin Rens: On Stochastic Belief Revision and Update and their Combination . . . 123

Gavin Rens, Thomas Meyer, Giovanni Casini: Revising Incompletely Specified Convex Probabilistic Belief Bases . . . 133

Zeynep G. Saribatur and Thomas Eiter: Reactive Policies with Planning for Action Languages . . . 143

Eugenia Ternovska: Static and Dynamic Views on the Algebra of Modular Systems . . . 153

Zhiqiang Zhuang, James Delgrande, Abhaya Nayak, Abdul Sattar: A New Approach for Revising Logic Programs . . . 163

Revising Incompletely Specified Convex Probabilistic Belief Bases

Gavin Rens
CAIR*, University of KwaZulu-Natal, School of Mathematics, Statistics and Comp. Sci.
CSIR Meraka, South Africa
Email: [email protected]

Thomas Meyer
CAIR, University of Cape Town, Dept. of Comp. Sci.
CSIR Meraka, South Africa
Email: [email protected]

Giovanni Casini
University of Luxembourg, Comp. Sci. and Communication Research Unit, Luxembourg
Email: [email protected]

Abstract

We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent's beliefs are represented by a set of probabilistic formulae – a belief base. The method involves determining a representative set of 'boundary' probability distributions consistent with the current belief base, revising each of these probability distributions, and then translating the revised information into a new belief base. We use a version of Lewis imaging as the revision operation. The correctness of the approach is proved. The expressivity of the belief bases under consideration is rather restricted, but they have some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties in those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet they yield different results.

Suppose an agent represents its probabilistic knowledge with a set of statements; every statement says something about the probability of some features the agent is aware of. Ideally, the agent would want to have enough information to, at least, identify one probability distribution over all the situations (worlds) it deems possible. However, if the agent could not gather sufficient data, or if it was not told or given sufficient information, it would not be able to pinpoint exactly one probability distribution. An agent with this sort of ignorance can be thought of as having beliefs compatible with a set of distributions. Now, this agent might need to revise its beliefs when new (non-probabilistic) information is received, even though the agent's beliefs do not characterize a particular probability distribution over its current possible worlds.

Several researchers argue that using a single probability distribution requires the agent to make unrealistically precise uncertainty distinctions (Grove and Halpern, 1998; Voorbraak, 1999; Yue and Liu, 2008).¹ "One widely-used approach to dealing with this has been to consider sets of probability measures as a way of modeling uncertainty," (Grove and Halpern, 1998). However, simply applying standard probabilistic conditioning to each of the measures/distributions in the set individually and then combining the results is also not recommended. The framework presented in this paper proposes two ways to go from one 'probabilistically incomplete' belief base to another when new information is acquired.

*Centre for Artificial Intelligence Research
Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

¹See also the references in these cited papers concerning criticisms against traditional probability theory.

Both belief revision methods presented essentially follow this process: From the original belief base, determine a relatively small set of belief states / probability distributions 'compatible' with the belief base which is, in a sense, representative of the belief base. (We shall use the terms belief state, probability distribution, probability function and distribution interchangeably.) Then revise every belief state in this representative set. Finally, induce a new, revised belief base from the revised representative set.

We shall present two approaches to determine the representative set of belief states from the current belief base: (i) The approach we focus on involves finding belief states which, in a sense, are at the boundaries of the constraints implied by the belief base. These 'boundary belief states' can be thought of as drawing the outline of the convex space of beliefs. This outline is then revised to form a new outline shape, which can be translated into a new belief base. (ii) As a possible alternative approach, the representative set is a single belief state which can be imagined to be at the center of the outline of the first approach. This 'central' belief state is found by determining the one in the space of beliefs which is least biased or most entropic in terms of information theory (Jaynes, 1978; Cover and Thomas, 1991).

For approach (i) – where the canonical set is the set of boundary belief states – we shall prove that the revised canonical set characterizes the set of all belief states which would have resulted from revising all (including interior) belief states compatible with the original belief base.

The relevant background theory and notations are now introduced.

We shall work with classical propositional logic. Let P be the finite set of atomic propositional variables (atoms, for short). Formally, a world is a unique assignment of truth values to all the atoms in P. There are thus 2^n conceivable worlds. An agent may consider some non-empty subset W of the conceivable worlds, called the possible worlds. Often, in the exposition of this paper, a world will be referred to by its truth vector. For instance, if the vocabulary is placed in order ⟨q, r⟩ and w3 ⊨ ¬q ∧ r, then w3 may be referred to as 01.² Let L be all propositional formulae which can be formed from P and the logical connectives ∧ and ¬, with ⊤ abbreviating tautology and ⊥ abbreviating contradiction.

Let φ be a sentence in L. [φ] denotes the set of φ-worlds, that is, the elements of W satisfying φ. The worlds satisfying all sentences in a set of sentences K are denoted by [K].

We define the probabilistic language L^prob = {(α) ⋈ x | α ∈ L, ⋈ ∈ {≤, =, ≥}, x ∈ [0, 1]}. Sentences with strict inequalities (<, >) are excluded from the language for now. Such sentences are more challenging to deal with and their inclusion is left for future work. We propose a belief base (BB) to be a consistent (logically satisfiable) subset of L^prob. A BB specifies an agent's knowledge.

The basic semantic element of an agent's beliefs is a probability distribution or a belief state

b = {(w1, p1), (w2, p2), . . . , (wn, pn)},

where pi is the probability that wi is the actual world in which the agent is, and Σ_{(w,p)∈b} p = 1. We may also use c to refer to a belief state. For parsimony, let b = ⟨p1, . . . , pn⟩ be the probabilities that belief state b assigns to w1, . . . , wn, where ⟨w1, w2, w3, w4⟩ = ⟨11, 10, 01, 00⟩ and ⟨w1, w2, . . . , w8⟩ = ⟨111, 110, . . . , 000⟩. Let Π be the set of all belief states over W.

b(α) abbreviates Σ_{w∈W, w⊨α} b(w). b satisfies formula (α) ⋈ x (denoted b ⊨ (α) ⋈ x) iff b(α) ⋈ x. If B is a set of formulae, then b satisfies B (denoted b ⊨ B) iff ∀β ∈ B, b ⊨ β. If B and B′ are sets of formulae, then B entails B′ (denoted B |= B′) iff for all b ∈ Π, b ⊨ B′ whenever b ⊨ B. If B |= {φ}, then we simply write B |= φ. B is logically equivalent to B′ (denoted B ≡ B′) iff B |= B′ and B′ |= B. Instead of an agent's beliefs being represented by a single belief state, a BB B represents a set of belief states: Let Π_B := {b ∈ Π | b ⊨ B}. A BB B is satisfiable (consistent) iff Π_B ≠ ∅.
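The semantics just defined can be sketched in a few lines of code. This is our own minimal illustration, not from the paper: a belief state is a dict from truth vectors to probabilities, and, as a simplifying assumption, formulas are encoded as Python predicates over worlds; the names `prob` and `satisfies` are ours.

```python
# Minimal sketch (not the paper's code): belief states as dicts from
# truth-vector strings to probabilities; formulas as predicates.

def prob(b, alpha):
    """b(alpha): total mass of the alpha-worlds."""
    return sum(p for w, p in b.items() if alpha(w))

def satisfies(b, alpha, op, x):
    """b |= (alpha) op x, with op one of '<=', '=', '>='."""
    v = prob(b, alpha)
    eps = 1e-9  # tolerate floating-point rounding
    return {"<=": v <= x + eps,
            "=": abs(v - x) <= eps,
            ">=": v >= x - eps}[op]

# Vocabulary <q, r>: world "10" means q true, r false.
b = {"11": 0.6, "10": 0.0, "01": 0.4, "00": 0.0}
q = lambda w: w[0] == "1"
print(satisfies(b, q, ">=", 0.6))  # True: b is in Pi_B for B = {(q) >= 0.6}
```

Under this encoding, Π_B would be the (infinite) set of all such dicts satisfying every formula of B; the code only checks membership for one candidate b.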

The technique of Lewis imaging for the revision of belief states requires a notion of distance between worlds to be defined. We use a pseudo-distance measure between worlds, as defined by Lehmann, Magidor, and Schlechta (2001) and adopted by Chhogyal et al. (2014).

We add a 'faithfulness' condition, which we feel is lacking from the definition of Lehmann, Magidor, and Schlechta (2001): without this condition, a pseudo-distance measure would allow all worlds to have zero distance between them. Boutilier (1998) mentions this condition, and we use his terminology: "faithfulness".

Definition 1. A pseudo-distance function d : W × W → Z satisfies the following five conditions: for all worlds w, w′, w″ ∈ W,

1. d(w, w′) ≥ 0 (Non-negativity)
2. d(w, w) = 0 (Identity)
3. d(w, w′) = d(w′, w) (Symmetry)
4. d(w, w′) + d(w′, w″) ≥ d(w, w″) (Triangular Inequality)
5. if w ≠ w′, then d(w, w′) > 0 (Faithfulness)

²w ⊨ α is read 'w is a model for/satisfies α'.
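Hamming distance between truth vectors is one concrete pseudo-distance satisfying Definition 1, and it is the distance used in the examples later in the paper. A small self-contained check (the function name `hamming` is ours):

```python
# Sketch: Hamming distance as a pseudo-distance in the sense of
# Definition 1, spot-checked over the vocabulary {q, r}.

def hamming(w1, w2):
    """Number of atoms on which the two worlds disagree."""
    return sum(a != b for a, b in zip(w1, w2))

worlds = ["11", "10", "01", "00"]
assert all(hamming(w, w) == 0 for w in worlds)                  # Identity
assert all(hamming(u, v) == hamming(v, u)
           for u in worlds for v in worlds)                     # Symmetry
assert all(hamming(u, v) + hamming(v, z) >= hamming(u, z)
           for u in worlds for v in worlds for z in worlds)     # Triangular Ineq.
assert all(hamming(u, v) > 0
           for u in worlds for v in worlds if u != v)           # Faithfulness
print(hamming("100", "110"))  # 1
```

Non-negativity holds trivially, since the function counts disagreements.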

Presently, the foundation theory, or paradigm, for studying belief change operations is commonly known as AGM theory (Alchourrón, Gärdenfors, and Makinson, 1985; Gärdenfors, 1988). Typically, belief change (in a static world) can be categorized as expansion, revision or contraction, and is performed on a belief set, the set of sentences K closed under logical consequence. Expansion (denoted +) is the logical consequences of K ∪ {α}, where α is new information and K is the current belief set. Contraction of α is the removal of some sentences until α cannot be inferred from K. It is the reduction of beliefs. Revision is when α is (possibly) inconsistent with K and K is (minimally) modified so that the new K remains consistent and entails α. In this view, when the new information is consistent with the original beliefs, expansion and revision are equivalent.

The next section presents a generalized imaging method for revising probabilistic belief states. Then we describe the application of generalized imaging in our main contribution: revising boundary belief states instead of all belief states. The subsequent section explains another approach to revising our belief bases, which prepares us for discussions in the rest of the paper. The latter method finds a single representative belief state through maximum entropy inference. Both the boundary belief state method and the maximum entropy method are reasonable, yet they yield different results – a seeming paradox is thus uncovered. Then possible future directions of research are discussed. We end with a section on related work and the concluding section.

Generalized Imaging

It is not yet universally agreed what revision means in a probabilistic setting. One school of thought says that probabilistic expansion is equivalent to Bayesian conditioning. This is evidenced by Bayesian conditioning (BC) being defined only when b(α) ≠ 0, thus making BC expansion equivalent to BC revision. In other words, one could define expansion (restricted revision) to be

b ∗_BC α = {(w, p) | w ∈ W, p = b(w | α), b(α) ≠ 0}.

To accommodate cases where b(α) = 0, that is, where α contradicts the agent's current beliefs and its beliefs need to be revised in the stronger sense, we shall make use of imaging. Imaging was introduced by Lewis (1976) as a means of revising a probability function. It has also been discussed in the work of, for instance, Gärdenfors (1988); Dubois and Prade (1993); Chhogyal et al. (2014); Rens and Meyer (2015). Informally, Lewis's original solution for accommodating contradicting evidence α is to move the probability of each world to its closest α-world. Lewis made the strong assumption that every world has a unique closest α-world. More general versions of imaging allow worlds to have several, equally proximate, closest worlds.
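The BC expansion above is ordinary conditioning: renormalize the mass of the α-worlds and zero out the rest. A hedged sketch under our encoding (worlds as truth-vector strings, formulas as predicates; the function name `condition` and the illustrative belief state are ours):

```python
# Sketch of b *_BC alpha: Bayesian conditioning, defined only when
# b(alpha) != 0.

def condition(b, alpha):
    """Return b *_BC alpha, or None when b(alpha) = 0 (BC undefined)."""
    mass = sum(p for w, p in b.items() if alpha(w))
    if mass == 0:
        return None  # revision in the stronger sense is needed, e.g. imaging
    return {w: (p / mass if alpha(w) else 0.0) for w, p in b.items()}

# An illustrative belief state over the vocabulary <q, r, s>.
b = {"111": 0, "110": 0.1, "101": 0, "100": 0.2,
     "011": 0, "010": 0.3, "001": 0, "000": 0.4}
q = lambda w: w[0] == "1"
print(condition(b, q))  # mass 0.3 renormalized over the q-worlds 110 and 100
```

The `None` branch is exactly the gap that imaging fills: when the observation has zero prior probability, conditioning gives no answer at all.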


Gärdenfors (1988) calls one of his generalizations of Lewis's imaging general imaging. Our method is also a generalization. We thus refer to his as Gärdenfors's general imaging and to our method as generalized imaging, to distinguish them. It should be noted that all three of these imaging methods are general revision methods and can be used in place of Bayesian conditioning for expansion. "Thus imaging is a more general method of describing belief changes than conditionalization," (Gärdenfors, 1988, p. 112).

Let Min(α, w, d) be the set of α-worlds closest to w with respect to pseudo-distance d. Formally,

Min(α, w, d) := {w′ ∈ [α] | ∀w″ ∈ [α], d(w′, w) ≤ d(w″, w)},

where d(·) is some pseudo-distance measure between worlds (e.g., Hamming or Dalal distance).

Example 1. Let the vocabulary be {q, r, s}. Let α be (q ∧ r) ∨ (q ∧ ¬r ∧ s). Suppose d is Hamming distance. Then

Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 111, d) = {111}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 110, d) = {110}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 101, d) = {101}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 100, d) = {110, 101}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 011, d) = {111}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 010, d) = {110}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 001, d) = {101}
Min((q ∧ r) ∨ (q ∧ ¬r ∧ s), 000, d) = {110, 101}
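Min(α, w, d) can be computed directly from its definition. The sketch below uses our encoding (α given by its set of worlds, `minimal` as an illustrative name) and reproduces the closest-world sets of Example 1:

```python
# Sketch of Min(alpha, w, d) under Hamming distance, checked against
# Example 1.

def hamming(w1, w2):
    return sum(a != b for a, b in zip(w1, w2))

def minimal(alpha_worlds, w, d=hamming):
    """The alpha-worlds closest to w under pseudo-distance d."""
    best = min(d(w2, w) for w2 in alpha_worlds)
    return {w2 for w2 in alpha_worlds if d(w2, w) == best}

# alpha = (q ^ r) v (q ^ ~r ^ s) over <q, r, s> has worlds 111, 110, 101.
alpha_worlds = {"111", "110", "101"}
for w in ["111", "110", "101", "100", "011", "010", "001", "000"]:
    print(w, sorted(minimal(alpha_worlds, w)))
# e.g. Min(alpha, 100, d) = {110, 101} and Min(alpha, 000, d) = {110, 101}
```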

Definition 2 (GI). Generalized imaging (denoted GI) is defined as

b ∗_GI α := {(w, p) | w ∈ W, p = 0 if w ∉ [α], else p = Σ_{w′∈W, w∈Min(α,w′,d)} b(w′)/|Min(α, w′, d)|}.

In words, b ∗_GI α is the new belief state produced by taking the generalized image of b with respect to α. Notice how the probability mass of non-α-worlds is shifted to their closest α-worlds. If a non-α-world w^× with probability p has n closest α-worlds (equally distant), then each of these closest α-worlds gets p/n mass from w^×.

We define b^∗_α := b ∗ α so that we can write b^∗_α(w), where ∗ is a revision operator.

Example 2. Continuing on Example 1: Let b = ⟨0, 0.1, 0, 0.2, 0, 0.3, 0, 0.4⟩. (q ∧ r) ∨ (q ∧ ¬r ∧ s) is abbreviated as α.

b^GI_α(111) = Σ_{w′∈W, 111∈Min(α,w′,d)} b(w′)/|Min(α, w′, d)|
= b(111)/|Min(α, 111, d)| + b(011)/|Min(α, 011, d)|
= 0/1 + 0/1 = 0.

b^GI_α(110) = Σ_{w′∈W, 110∈Min(α,w′,d)} b(w′)/|Min(α, w′, d)|
= b(110)/|Min(α, 110, d)| + b(100)/|Min(α, 100, d)| + b(010)/|Min(α, 010, d)| + b(000)/|Min(α, 000, d)|
= 0.1/1 + 0.2/2 + 0.3/1 + 0.4/2 = 0.7.

b^GI_α(101) = Σ_{w′∈W, 101∈Min(α,w′,d)} b(w′)/|Min(α, w′, d)|
= b(101)/|Min(α, 101, d)| + b(100)/|Min(α, 100, d)| + b(001)/|Min(α, 001, d)| + b(000)/|Min(α, 000, d)|
= 0/1 + 0.2/2 + 0/1 + 0.4/2 = 0.3.

And b^GI_α(100) = b^GI_α(011) = b^GI_α(010) = b^GI_α(001) = b^GI_α(000) = 0.
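Definition 2 can be implemented by pushing each world's mass to its closest α-worlds and splitting ties evenly. A sketch under our encoding (`gi` is an illustrative name), recomputing Example 2:

```python
# Sketch of generalized imaging (Definition 2) with Hamming distance,
# verified against Example 2.

def hamming(w1, w2):
    return sum(a != b for a, b in zip(w1, w2))

def minimal(alpha_worlds, w, d=hamming):
    best = min(d(w2, w) for w2 in alpha_worlds)
    return {w2 for w2 in alpha_worlds if d(w2, w) == best}

def gi(b, alpha_worlds, d=hamming):
    """b *_GI alpha: shift each world's mass to its closest alpha-worlds."""
    new = {w: 0.0 for w in b}
    for w_src, p in b.items():
        closest = minimal(alpha_worlds, w_src, d)
        for w in closest:  # a tie over n worlds gives each p/n
            new[w] += p / len(closest)
    return new

# Example 2: b = <0, 0.1, 0, 0.2, 0, 0.3, 0, 0.4> over <111, 110, ..., 000>.
b = {"111": 0, "110": 0.1, "101": 0, "100": 0.2,
     "011": 0, "010": 0.3, "001": 0, "000": 0.4}
alpha_worlds = {"111", "110", "101"}  # (q ^ r) v (q ^ ~r ^ s)
revised = gi(b, alpha_worlds)
print(revised["110"], revised["101"])  # approximately 0.7 and 0.3, as above
```

All mass ends up on α-worlds, so the result sums to 1, matching the hand computation in Example 2.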

Revision via GI and boundary belief states

Perhaps the most obvious way to revise a given belief base (BB) B is to revise every individual belief state in Π_B and then induce a new BB from the set of revised belief states. Formally, given observation α, first determine a new belief state b^α for every b ∈ Π_B via the defined revision operation:

Π_B^α = {b^α ∈ Π | b^α = b ∗_GI α, b ∈ Π_B}.

If there is more than only a single belief state in Π_B, then Π_B contains an infinite number of belief states. Then how can one compute Π_B^α? And how would one subsequently determine B^α from Π_B^α?

In the rest of this section we shall present a finite method of determining Π_B^α. What makes this method possible is the insight that Π_B can be represented by a finite set of 'boundary' belief states – those belief states which, in a sense, represent the limits or the convex hull of Π_B. We shall prove that the set of revised boundary belief states defines Π_B^α. Inducing B^α from Π_B^α is then relatively easy, as will be seen.

Let W^perm be every permutation on the ordering of worlds in W. For instance, if W = {w1, w2, w3, w4}, then W^perm = {⟨w1, w2, w3, w4⟩, ⟨w1, w2, w4, w3⟩, ⟨w1, w3, w2, w4⟩, . . . , ⟨w4, w3, w2, w1⟩}. Given an ordering W^# ∈ W^perm, let W^#(i) be the i-th element of W^#; for instance, ⟨w4, w3, w2, w1⟩(2) = w3. Suppose we are given a BB B. We now define a function which, given a permutation of worlds, returns a belief state where worlds earlier in the ordering are assigned maximal probabilities according to the boundary values enforced by B.

Definition 3. MaxASAP(B, W^#) is the b ∈ Π_B such that for i = 1, . . . , |W|, ∀b′ ∈ Π_B, if b′ ≠ b, then Σ_{j=1}^{i} b(W^#(j)) ≥ Σ_{k=1}^{i} b′(W^#(k)).

Example 3. Suppose the vocabulary is {q, r} and B1 = {(q) ≥ 0.6}. Then, for instance, MaxASAP(B1, ⟨01, 00, 11, 10⟩) = {(01, 0.4), (00, 0), (11, 0.6), (10, 0)} = {(11, 0.6), (10, 0), (01, 0.4), (00, 0)}.

Definition 4. We define the boundary belief states of BB B as the set

Π_B^bnd := {b ∈ Π_B | ∃W^# ∈ W^perm, b = MaxASAP(B, W^#)}.

Note that |Π_B^bnd| ≤ |W^perm|.


Example 4. Suppose the vocabulary is {q, r} and B1 = {(q) ≥ 0.6}. Then

Π_{B1}^bnd = {{(11, 1.0), (10, 0.0), (01, 0.0), (00, 0.0)},
{(11, 0.0), (10, 1.0), (01, 0.0), (00, 0.0)},
{(11, 0.6), (10, 0.0), (01, 0.4), (00, 0.0)},
{(11, 0.6), (10, 0.0), (01, 0.0), (00, 0.4)},
{(11, 0.0), (10, 0.6), (01, 0.4), (00, 0.0)},
{(11, 0.0), (10, 0.6), (01, 0.0), (00, 0.4)}}.

Next, the revision operation is applied to every belief state in Π_B^bnd. Let (Π_B^bnd)^GI_α := {b′ ∈ Π | b′ = b^GI_α, b ∈ Π_B^bnd}.

Example 5. Suppose the vocabulary is {q, r} and B1 = {(q) ≥ 0.6}. Let α be (q ∧ ¬r) ∨ (¬q ∧ r). Then

(Π_{B1}^bnd)^GI_α = {{(11, 0.0), (10, 0.5), (01, 0.5), (00, 0.0)},
{(11, 0.0), (10, 1.0), (01, 0.0), (00, 0.0)},
{(11, 0.0), (10, 0.3), (01, 0.7), (00, 0.0)},
{(11, 0.0), (10, 0.6), (01, 0.4), (00, 0.0)},
{(11, 0.0), (10, 0.8), (01, 0.2), (00, 0.0)}}.

(Two revision operations produce {(11, 0), (10, 0.5), (01, 0.5), (00, 0)}.)
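Example 5 can be checked mechanically: revise each of the six boundary belief states of Example 4 by generalized imaging under Hamming distance and count the distinct results. A sketch with the boundary set hard-coded from Example 4 (the helper names are ours):

```python
# Sketch reproducing Example 5: revise the boundary belief states of
# B1 = {(q) >= 0.6} by alpha = (q ^ ~r) v (~q ^ r) via generalized
# imaging with Hamming distance.

def hamming(w1, w2):
    return sum(a != b for a, b in zip(w1, w2))

def gi(b, alpha_worlds, d=hamming):
    new = {w: 0.0 for w in b}
    for w_src, p in b.items():
        best = min(d(w2, w_src) for w2 in alpha_worlds)
        closest = [w2 for w2 in alpha_worlds if d(w2, w_src) == best]
        for w in closest:
            new[w] += p / len(closest)
    return new

boundary = [  # the six boundary belief states of Example 4
    {"11": 1.0, "10": 0.0, "01": 0.0, "00": 0.0},
    {"11": 0.0, "10": 1.0, "01": 0.0, "00": 0.0},
    {"11": 0.6, "10": 0.0, "01": 0.4, "00": 0.0},
    {"11": 0.6, "10": 0.0, "01": 0.0, "00": 0.4},
    {"11": 0.0, "10": 0.6, "01": 0.4, "00": 0.0},
    {"11": 0.0, "10": 0.6, "01": 0.0, "00": 0.4},
]
alpha_worlds = {"10", "01"}  # worlds of (q ^ ~r) v (~q ^ r)
revised = [gi(b, alpha_worlds) for b in boundary]
distinct = {tuple(round(b[w], 6) for w in ("11", "10", "01", "00"))
            for b in revised}
print(len(distinct))  # 5: two revisions coincide at (10: 0.5, 01: 0.5)
```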

To induce the new BB B^α_bnd from (Π_B^bnd)^GI_α, the following procedure is executed. For every possible world, the procedure adds a sentence enforcing the upper (resp., lower) probability limit of the world, with respect to all the revised boundary belief states. Trivial limits are excepted:

For every w ∈ W, (φ_w) ≤ y ∈ B^α, where y = max_{b ∈ (Π_B^bnd)^GI_α} b(w), except when y = 1, and (φ_w) ≥ y ∈ B^α, where y = min_{b ∈ (Π_B^bnd)^GI_α} b(w), except when y = 0.

The intention is that the procedure specifies B^α to represent the upper and lower probability envelopes of the set of revised boundary belief states – B^α thus defines the entire revised belief state space (cf. Theorem 1).

Example 6. Continuing Example 5, using the translation procedure just above, we see that B^α_{1bnd} = {(φ_11) ≤ 0, (φ_10) ≥ 0.3, (φ_01) ≤ 0.7, (φ_00) ≤ 0.0}.

Note that if we let B′ = {((q ∧ ¬r) ∨ (¬q ∧ r)) = 1, (q ∧ ¬r) ≥ 0.3}, then Π_{B′} = Π_{B^α_{1bnd}}.
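The induction procedure is just a per-world max/min over the revised boundary set, skipping the trivial limits 1 (upper) and 0 (lower). A sketch with (Π_{B1}^bnd)^GI_α hard-coded from Example 5, reproducing the bounds of Example 6 (`induce` is an illustrative name):

```python
# Sketch of the belief-base induction step, checked against Example 6.

revised = [  # (Pi_B1_bnd)^GI_alpha from Example 5
    {"11": 0.0, "10": 0.5, "01": 0.5, "00": 0.0},
    {"11": 0.0, "10": 1.0, "01": 0.0, "00": 0.0},
    {"11": 0.0, "10": 0.3, "01": 0.7, "00": 0.0},
    {"11": 0.0, "10": 0.6, "01": 0.4, "00": 0.0},
    {"11": 0.0, "10": 0.8, "01": 0.2, "00": 0.0},
]

def induce(revised):
    base = []
    for w in revised[0]:
        hi = max(b[w] for b in revised)
        lo = min(b[w] for b in revised)
        if hi != 1:  # skip the trivial upper limit
            base.append((w, "<=", hi))
        if lo != 0:  # skip the trivial lower limit
            base.append((w, ">=", lo))
    return base

print(induce(revised))
# [('11', '<=', 0.0), ('10', '>=', 0.3), ('01', '<=', 0.7), ('00', '<=', 0.0)]
```

These four constraints are exactly B^α_{1bnd} of Example 6.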

Example 7. Suppose the vocabulary is {q, r} and B2 = {(¬q ∧ ¬r) = 0.1}. Let α be ¬q. Then

Π_{B2}^bnd = {{(11, 0.9), (10, 0), (01, 0), (00, 0.1)},
{(11, 0), (10, 0.9), (01, 0), (00, 0.1)},
{(11, 0), (10, 0), (01, 0.9), (00, 0.1)}},

(Π_{B2}^bnd)^GI_α = {{(11, 0), (10, 0), (01, 0.9), (00, 0.1)},
{(11, 0), (10, 0), (01, 0), (00, 1)}} and

B^α_{2bnd} = {(φ_11) ≤ 0, (φ_10) ≤ 0, (φ_01) ≤ 0.9, (φ_00) ≥ 0.1}.

Note that if we let B′ = {(¬q) = 1, (¬q ∧ r) ≤ 0.9}, then Π_{B′} = Π_{B^α_{2bnd}}.

Let W_{Min(α,d)} be a partition of W such that {w^i_1, ..., w^i_{n_i}} is a block in W_{Min(α,d)} iff |Min(α, w^i_1, d)| = ··· = |Min(α, w^i_{n_i}, d)|. Denote an element of block {w^i_1, ..., w^i_{n_i}} as w^i, and the block of which w^i is an element as [w^i]. Let i = |Min(α, w^i, d)|; in other words, the superscript in w^i indicates the size of Min(α, w^i, d). Let m := max_{w∈W} |Min(α, w, d)|.

Observation 1. Let λ_1, λ_2, ..., λ_m be positive integers such that i < j iff λ_i < λ_j. Let ν_1, ν_2, ..., ν_m be values in [0, 1] such that ∑_{k=1}^m ν_k = 1. Associate with every ν_i a maximum value it is allowed to take: most(ν_i). For every ν_i, we define the assignment value

av(ν_i) := most(ν_i) if ∑_{k=1}^i most(ν_k) ≤ 1; otherwise, 1 − ∑_{k=1}^{i−1} av(ν_k).

Determine first av(ν_1), then av(ν_2) and so on. Then

av(ν_1)/λ_1 + ··· + av(ν_m)/λ_m > ν′_1/λ_1 + ··· + ν′_m/λ_m

whenever ν′_i ≠ av(ν_i) for some i. □

For instance, let λ_1 = 1, λ_2 = 2, λ_3 = 3, λ_4 = 4. Let most(ν_1) = 0.5, most(ν_2) = 0.3, most(ν_3) = 0.2, most(ν_4) = 0.3. Then av(ν_1) = 0.5, av(ν_2) = 0.3, av(ν_3) = 0.2, av(ν_4) = 0 and

0.5/1 + 0.3/2 + 0.2/3 + 0/4 = 0.716.

But

0.49/1 + 0.3/2 + 0.2/3 + 0.01/4 = 0.709.

And

0.5/1 + 0.29/2 + 0.2/3 + 0.01/4 = 0.714.
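The greedy assignment of Observation 1 can be sketched in a few lines (names are ours; Fractions avoid floating-point noise). For the instance above it reproduces av = (0.5, 0.3, 0.2, 0) and the value 43/60 ≈ 0.7167, which the running text truncates to 0.716:

```python
from fractions import Fraction as F

def assignment_values(most):
    """Greedy av(.): give each position its maximum while mass remains."""
    av, total = [], F(0)
    for m in most:
        v = max(min(m, 1 - total), F(0))   # most(v_i) if it still fits, else the remainder
        av.append(v)
        total += v
    return av

lambdas = [1, 2, 3, 4]
most = [F(1, 2), F(3, 10), F(1, 5), F(3, 10)]   # 0.5, 0.3, 0.2, 0.3
av = assignment_values(most)
print([float(v) for v in av])                   # [0.5, 0.3, 0.2, 0.0]
print(float(sum(v / l for v, l in zip(av, lambdas))))   # 0.7166666666666667
```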

Lemma 1 essentially says that the belief state in Π^B which causes a revised belief state to have a maximal value at world w (w.r.t. all belief states in Π^B) will be in Π^B_bnd.

Lemma 1. For all w ∈ W, argmax_{b_X∈Π^B} ∑_{w′∈W, w∈Min(α,w′,d)} b_X(w′)/|Min(α, w′, d)| is in Π^B_bnd.

Proof. Note that

∑_{w′∈W, w∈Min(α,w′,d)} b(w′)/|Min(α, w′, d)|

can be written in the form

∑_{w′∈[w^1], w∈Min(α,w′,d)} b(w′)/1 + ··· + ∑_{w′∈[w^m], w∈Min(α,w′,d)} b(w′)/m.

Observe that there must be a W# ∈ W_perm such that W# = ⟨w^1_1, ..., w^1_{n_1}, ..., w^m_1, ..., w^m_{n_m}⟩. Then by the

136 revising incompletely specified convex probabilistic belief bases

definition of the set of boundary belief states (Def. 4), MaxASAP(B, W#) will assign maximal probability mass to [w^1] = {w^1_1, ..., w^1_{n_1}}, then to [w^2] = {w^2_1, ..., w^2_{n_2}}, and so on.

That is, by Observation 1, for some b_x ∈ Π^B_bnd, b_x(w) = max_{b_X∈Π^B} ∑_{w′∈W, w∈Min(α,w′,d)} b_X(w′)/|Min(α, w′, d)| for all w ∈ W. Therefore, argmax_{b_X∈Π^B} ∑_{w′∈W, w∈Min(α,w′,d)} b_X(w′)/|Min(α, w′, d)| is in Π^B_bnd. ∎

Let

x̄_w := max_{b∈Π^B_bnd} b(w),  X̄_w := max_{b∈Π^B} b(w),
ȳ_w := max_{b∈(Π^B_bnd)^GI_α} b(w),  Ȳ_w := max_{b∈(Π^B)^GI_α} b(w),
x_w := min_{b∈Π^B_bnd} b(w),  X_w := min_{b∈Π^B} b(w),
y_w := min_{b∈(Π^B_bnd)^GI_α} b(w),  Y_w := min_{b∈(Π^B)^GI_α} b(w),

where an overbar marks an upper bound (a max) and no bar a lower bound (a min).

Lemma 2 states that, for every world, the upper/lower probability of the world with respect to (Π^B_bnd)^GI_α is equal to the upper/lower probability of the world with respect to (Π^B)^GI_α. The proof requires Observation 1 and Lemma 1.

Lemma 2. For all w ∈ W, ȳ_w = Ȳ_w and y_w = Y_w.

Proof. Note that if w ∉ [α], then ȳ_w = Ȳ_w = 0 and y_w = Y_w = 0. We now consider the cases where w ∈ [α].

ȳ_w = Ȳ_w iff

max_{b∈(Π^B_bnd)^GI_α} b(w) = max_{b∈(Π^B)^GI_α} b(w)

iff

max_{b_x∈Π^B_bnd} ∑_{w′∈W, w∈Min(α,w′,d)} b_x(w′)/|Min(α, w′, d)| = max_{b_X∈Π^B} ∑_{w′∈W, w∈Min(α,w′,d)} b_X(w′)/|Min(α, w′, d)|,

which holds if b_x(w) = b_X(w), where

b_x(w) := max_{b_x∈Π^B_bnd} ∑_{w′∈W, w∈Min(α,w′,d)} b_x(w′)/|Min(α, w′, d)|

and

b_X(w) := max_{b_X∈Π^B} ∑_{w′∈W, w∈Min(α,w′,d)} b_X(w′)/|Min(α, w′, d)|.

Note that ∑_{w′∈W, w∈Min(α,w′,d)} b(w′)/|Min(α, w′, d)| can be written in the form

∑_{w′∈[w^1], w∈Min(α,w′,d)} b(w′)/1 + ··· + ∑_{w′∈[w^m], w∈Min(α,w′,d)} b(w′)/m.

Then by Observation 1, the belief state attaining b_X(w) is in Π^B_bnd. And also by Lemma 1, the belief state in Π^B_bnd identified by b_X(w) must be the one which maximizes

∑_{w′∈W, w∈Min(α,w′,d)} b_x(w′)/|Min(α, w′, d)|,

where b_x ∈ Π^B_bnd. That is, b_x = b_X.

With a symmetrical argument, it can be shown that y_w = Y_w. ∎

In intuitive language, the following theorem says that the BB determined through the method of revising boundary belief states captures exactly the same beliefs and ignorance as the belief states in Π^B which have been revised. This correspondence relies on the fact that the upper and lower probability envelopes of Π^B can be induced from Π^B_bnd, which is what Lemma 2 states.

Theorem 1. Let (Π^B)^GI_α := {b^GI_α ∈ Π | b ∈ Π^B}. Let B^α_bnd be the BB induced from (Π^B_bnd)^GI_α. Then Π^{B^α_bnd} = (Π^B)^GI_α.

Proof. We show that ∀b′ ∈ Π, b′ ∈ Π^{B^α_bnd} ⟺ b′ ∈ (Π^B)^GI_α.

(⟹) b′ ∈ Π^{B^α_bnd} implies that ∀w ∈ W, y_w ≤ b′(w) ≤ ȳ_w (by definition of B^α_bnd). Lemma 2 states that for all w ∈ W, ȳ_w = Ȳ_w and y_w = Y_w. Hence, ∀w ∈ W, Y_w ≤ b′(w) ≤ Ȳ_w. Therefore, b′ ∈ (Π^B)^GI_α.

(⟸) b′ ∈ (Π^B)^GI_α implies that ∀w ∈ W, Y_w ≤ b′(w) ≤ Ȳ_w. Hence, by Lemma 2, ∀w ∈ W, y_w ≤ b′(w) ≤ ȳ_w. Therefore, by definition of B^α_bnd, b′ ∈ Π^{B^α_bnd}. ∎
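Theorem 1 can also be probed numerically. The sketch below (our code, assuming d is the Hamming distance between worlds) samples belief states satisfying B1 = {(q) ≥ 0.6} from Example 5, revises each with GI on α = (q ∧ ¬r) ∨ (¬q ∧ r), and checks that every revision respects the envelope prescribed by B^α_{1bnd} in Example 6:

```python
import random

# Empirical sanity check (not a proof) of Theorem 1 on Examples 5/6.
# For alpha-worlds {10, 01}, worlds 11 and 00 are equidistant to both,
# so each sends half its mass to each alpha-world.
def gi_on_alpha(b):
    """b = (p11, p10, p01, p00); returns the GI-revised state."""
    p11, p10, p01, p00 = b
    moved = (p11 + p00) / 2.0          # equal split of displaced mass
    return (0.0, p10 + moved, p01 + moved, 0.0)

rng = random.Random(1)
for _ in range(10000):
    # Sample a belief state with p(q) = p11 + p10 >= 0.6.
    pq = rng.uniform(0.6, 1.0)
    p11 = rng.uniform(0.0, pq)
    p01 = rng.uniform(0.0, 1.0 - pq)
    b = (p11, pq - p11, p01, 1.0 - pq - p01)
    _, r10, r01, _ = gi_on_alpha(b)
    # Envelope from Example 6: p(10) >= 0.3 and p(01) <= 0.7.
    assert 0.3 <= r10 + 1e-9 and r01 <= 0.7 + 1e-9
print("all sampled revisions lie within the induced envelope")
```

Sampling cannot establish the equality of Theorem 1, of course; it merely fails to find a counterexample to one inclusion.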

Revising via a Representative Belief State

Another approach to the revision of a belief base (BB) is to determine a representative of Π^B (call it b_rep), change the representative belief state via the defined revision operation, and then induce a new BB from the revised representative belief state. Selecting a representative probability function from a family of such functions is not new (e.g., Goldszmidt, Morris, and Pearl, 1990; Paris, 1994). More formally, given observation α, first determine b_rep ∈ Π^B, then compute its revision b^α_rep, and finally induce B^α from b^α_rep.

We shall represent Π^B (and thus B) by the single ‘least biased’ belief state, that is, the belief state in Π^B with highest entropy:

Definition 5 (Shannon Entropy).

H(b) := −∑_{w∈W} b(w) ln b(w),

where b is a belief state.


Definition 6 (Maximum Entropy). Traditionally, given some set of distributions Π, the most entropic distribution in Π is defined as

b_H := argmax_{b∈Π} H(b).

Suppose B2 = {(¬q ∧ ¬r) = 0.1}. Then the belief state b ∈ Π^{B2} satisfying the constraints posed by B2 for which H(b) is maximized is b_rep = b_H = ⟨0.3, 0.3, 0.3, 0.1⟩.

The above distribution can be found directly by applying the principle of maximum entropy: the true belief state is estimated to be the one consistent with known constraints, but otherwise as unbiased as possible, or “Given no other knowledge, assume that everything is as random as possible. That is, the probabilities are distributed as uniformly as possible consistent with the available information” (Poole and Mackworth, 2010). Obviously, world 00 must be assigned probability 0.1, and the remaining 0.9 probability mass should be uniformly spread across the other three worlds.
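The maximum-entropy computation for B2 can be checked with a small brute-force sketch (our code, not the authors'): fix b(00) = 0.1, sample many ways of splitting the remaining 0.9 over the other three worlds, and confirm that none beats the uniform split ⟨0.3, 0.3, 0.3, 0.1⟩:

```python
import math, random

def entropy(b):
    """Shannon entropy H(b), with the convention 0·ln 0 = 0."""
    return -sum(p * math.log(p) for p in b if p > 0)

# Feasible states for B2 = {(¬q∧¬r) = 0.1}: b(00) is fixed at 0.1 and
# the remaining 0.9 is split arbitrarily over worlds 11, 10, 01.
def random_feasible(rng):
    cuts = sorted([rng.random(), rng.random()])
    parts = [cuts[0], cuts[1] - cuts[0], 1 - cuts[1]]
    return [0.9 * p for p in parts] + [0.1]

b_rep = [0.3, 0.3, 0.3, 0.1]   # uniform spread of the free 0.9 mass
rng = random.Random(0)
assert all(entropy(random_feasible(rng)) <= entropy(b_rep)
           for _ in range(10000))
print(round(entropy(b_rep), 4))   # 1.3138
```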

Applying GI to b_rep on evidence ¬q results in b^{¬q}_rep = ⟨0, 0, 0.6, 0.4⟩.

Example 8. Suppose the vocabulary is {q, r}, B1 = {(q) ≥ 0.6} and α is (q ∧ ¬r) ∨ (¬q ∧ r). Then b_rep = argmax_{b∈Π^{B1}} H(b) = ⟨0.3, 0.3, 0.2, 0.2⟩. Applying GI to b_rep on α results in b^α_rep = ⟨0, 0.61, 0.39, 0⟩. b^α_rep can be translated into B^α_{1rep} as {(q ∧ ¬r) = 0.61, (¬q ∧ r) = 0.39}. □
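The GI operation itself can be sketched as follows (our code; we assume d is the Hamming distance between worlds): every world sends its probability mass, split equally, to the worlds in Min(α, w, d). On b_rep = ⟨0.3, 0.3, 0.3, 0.1⟩ for B2, with evidence ¬q, it reproduces ⟨0, 0, 0.6, 0.4⟩:

```python
# Sketch of generalized imaging (GI) over a two-atom vocabulary,
# assuming d is the Hamming distance between worlds.
WORLDS = ["11", "10", "01", "00"]

def hamming(w1, w2):
    return sum(c1 != c2 for c1, c2 in zip(w1, w2))

def gi(b, alpha_worlds):
    """b: dict world -> prob; alpha_worlds: the worlds satisfying alpha."""
    revised = {w: 0.0 for w in WORLDS}
    for w, mass in b.items():
        dmin = min(hamming(w, a) for a in alpha_worlds)
        closest = [a for a in alpha_worlds if hamming(w, a) == dmin]
        for a in closest:                  # Min(alpha, w, d), equal split
            revised[a] += mass / len(closest)
    return revised

# b_rep for B2 = {(¬q∧¬r) = 0.1}, revised on evidence ¬q (worlds 01, 00):
b_rep = {"11": 0.3, "10": 0.3, "01": 0.3, "00": 0.1}
print(gi(b_rep, ["01", "00"]))
# {'11': 0.0, '10': 0.0, '01': 0.6, '00': 0.4}
```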

Still using α = (q ∧ ¬r) ∨ (¬q ∧ r), notice that Π^{B^α_{1rep}} ≠ Π^{B^α_{1bnd}}. But how different are B^α_{1rep} = {(q ∧ ¬r) = 0.61, (¬q ∧ r) = 0.39} and B^α_{1bnd} = {(q ∧ r) ≤ 0, (q ∧ ¬r) ≥ 0.3, (¬q ∧ r) ≤ 0.7, (¬q ∧ ¬r) ≤ 0.0}? Perhaps one should ask how different B^α_{1rep} is from the representative of B^α_{1bnd}: the least biased belief state satisfying B^α_{1bnd} is ⟨0, 0.5, 0.5, 0⟩. That is, how different are ⟨0, 0.61, 0.39, 0⟩ and ⟨0, 0.5, 0.5, 0⟩?

In the case of B2, we could compare B^{¬q}_{2bnd} = {(φ_11) ≤ 0, (φ_10) ≤ 0, (φ_01) ≤ 0.9, (φ_00) ≥ 0.1} with b^{¬q}_rep = ⟨0, 0, 0.6, 0.4⟩. Or, if we take the least biased belief state satisfying B^{¬q}_{2bnd}, we can compare ⟨0, 0, 0.5, 0.5⟩ with ⟨0, 0, 0.6, 0.4⟩.

It has been extensively argued (Jaynes, 1978; Shore and Johnson, 1980; Paris and Vencovská, 1997) that maximum entropy is a reasonable inference mechanism, if not the most reasonable one (w.r.t. probability constraints). And in the sense that the boundary belief states method requires no compression / information loss, it also seems like a very reasonable inference mechanism for revising BBs as defined here. Resolving this misalignment in the results of the two methods is an obvious task for future research.

Future Directions

Some important aspects still missing from our framework are the representation of conditional probabilistic information, such as is done in the work of Kern-Isberner, and the association of information with its level of entrenchment. On the latter point, when one talks about probabilities or likelihoods, if one were to take a frequentist perspective, information observed more (less) often should become more (less) entrenched. Or, without considering observation frequencies, an agent could be designed to have, say, one or two sets of deeply entrenched background knowledge (e.g., domain constraints) which does not change, or is more immune to change than ‘regular’ knowledge.

Given that we have found that the belief base resulting from revising via the boundary-belief-states approach differs from the belief base resulting from revising via the representative-belief-state approach, the question arises: when is it appropriate to use a representative belief state defined as the most entropic belief state of a given set Π^B? This is an important question, especially due to the popularity of employing the Maximum Entropy principle in cases of underspecified probabilistic knowledge (Jaynes, 1978; Goldszmidt, Morris, and Pearl, 1990; Hunter, 1991; Voorbraak, 1999; Kern-Isberner, 2001; Kern-Isberner and Rödder, 2004) and the principle's well-behavedness (Shore and Johnson, 1980; Paris, 1994; Kern-Isberner, 1998).

Katsuno and Mendelzon (1991) modified the eight AGM belief revision postulates (Alchourrón, Gärdenfors, and Makinson, 1985) to the following six (written in the notation of this paper), where * is some revision operator.³

• B^α_* ⊨ (α) = 1.
• If B ∪ {(α) = 1} is satisfiable, then B^α_* ≡ B ∪ {(α) = 1}.
• If (α) = 1 is satisfiable, then B^α_* is also satisfiable.
• If α ≡ β, then B^α_* ≡ B^β_*.
• B^α_* ∪ {(β) = 1} ⊨ B^{α∧β}_*.
• If B^α_* ∪ {(β) = 1} is satisfiable, then B^{α∧β}_* ⊨ B^α_* ∪ {(β) = 1}.

Testing the various revision operations against these postulates is left for a sequel paper.

An extended version of maximum entropy is minimum cross-entropy (MCE) (Kullback, 1968; Csiszár, 1975):

Definition 7 (Minimum Cross-Entropy). The ‘directed divergence’ of distribution c from distribution b is defined as

R(c, b) := ∑_{w∈W} c(w) ln (c(w)/b(w)).

R(c, b) is undefined when b(w) = 0 while c(w) > 0; when c(w) = 0, the corresponding term is taken to be 0, because lim_{x→0⁺} x ln x = 0. Given new evidence β ∈ L^prob, the distribution c satisfying β and diverging least from current belief state b is

argmin_{c∈Π, c⊩β} R(c, b).
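A small numeric sketch (our code) illustrates R(c, b), including the undefined case, and the known fact that among the belief states satisfying (¬q) = 1, the one obtained by Bayesian conditioning of b diverges least:

```python
import math

# Sketch of directed divergence R(c, b); returns None where the text
# says R is undefined (b(w) = 0 while c(w) > 0).
def R(c, b):
    total = 0.0
    for cw, bw in zip(c, b):
        if cw == 0:
            continue                    # the term x·ln(x/..) vanishes as x -> 0
        if bw == 0:
            return None                 # undefined
        total += cw * math.log(cw / bw)
    return total

b = [0.3, 0.3, 0.3, 0.1]               # current belief state
# Candidates satisfying (¬q) = 1, i.e. all mass on worlds 01 and 00:
c1 = [0.0, 0.0, 0.75, 0.25]            # Bayesian conditioning of b on ¬q
c2 = [0.0, 0.0, 0.5, 0.5]
print(round(R(c1, b), 4), round(R(c2, b), 4))   # 0.9163 1.0601
```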

Definition 8 (MCI). MCE inference (denoted MCI) is defined as

b^MCI_α := argmin_{b′∈Π, b′⊩(α)=1} R(b′, b).

In the following example, we interpret revision as MCE inference.

³In these postulates, it is sometimes necessary to write an observation α as a BB, i.e., as {(α) = 1} – in the present framework, observations are regarded as certain.


Example 9. Suppose the vocabulary is {q, r} and B1 = {(q) ≥ 0.6}. Let α be (q ∧ ¬r) ∨ (¬q ∧ r). Then
Π^{B1}_bnd = {
{(11, 1.0), (10, 0.0), (01, 0.0), (00, 0.0)},
{(11, 0.0), (10, 1.0), (01, 0.0), (00, 0.0)},
{(11, 0.6), (10, 0.0), (01, 0.4), (00, 0.0)},
{(11, 0.6), (10, 0.0), (01, 0.0), (00, 0.4)},
{(11, 0.0), (10, 0.6), (01, 0.4), (00, 0.0)},
{(11, 0.0), (10, 0.6), (01, 0.0), (00, 0.4)}},
(Π^{B1}_bnd)^MCI_α = {
{(11, 0), (10, 0), (01, 1), (00, 0)},
{(11, 0), (10, 1), (01, 0), (00, 0)},
{(11, 0), (10, 0.6), (01, 0.4), (00, 0)}} and
B^α_{1bnd} = {(φ_11) ≤ 0, (φ_00) ≤ 0}.
Note that if we let B′ = {((q ∧ ¬r) ∨ (¬q ∧ r)) = 1}, then Π^{B′} = Π^{B^α_{1bnd}}. □

Recall from Example 6 that B′ included (q ∧ ¬r) ≥ 0.3. Hence, in this particular case, combining the boundary belief states approach with MCI results in a less informative revised belief base than when GI is used. The reason for the loss of information might be that R(·, {(11, 1.0), (10, 0.0), (01, 0.0), (00, 0.0)}) and R(·, {(11, 0.6), (10, 0.0), (01, 0.0), (00, 0.4)}) are undefined: recall that R(c, b) is undefined when b(w) = 0 while c(w) > 0. But then there is no belief state c for which c ⊩ α and R(·) is defined (with these two belief states as second arguments). Hence, there are no revised counterparts of these two belief states in (Π^{B1}_bnd)^MCI_α. We would like to analyse MCI more within this framework. In particular, in the future, we would like to determine whether a statement like Theorem 1 holds for MCI too.

In MCE inference, b-consistency of evidence β is defined as follows: there exists a belief state c such that c ⊩ β and c is absolutely continuous with respect to b (i.e., b(w) = 0 implies c(w) = 0). MCE is undefined when the evidence is not b-consistent. This is analogous to Bayesian conditioning being undefined for b(α) = 0. Obviously, this is a limitation of MCE, because some belief states may not be considered as candidate revised belief states. Admittedly, we have not searched the literature on this topic, as it falls outside the present scope.

As far as we know, imaging for belief change has never been applied to (conditional) probabilistic evidence. Because many revision methods require new evidence to be consistent with the prior beliefs, while imaging does not have this limitation, imaging-based revision might be worthwhile to investigate.

The translation from the set of belief states back to a belief base is a mapping from every belief state to a probability formula. The size of the belief base is thus on the order of |W_perm|, where |W| is already exponential in the size of P, the set of atoms. As we saw in several examples in this paper, the new belief base often has a more concise equivalent counterpart. It would be useful to find a way to consistently determine more concise belief bases than our present approach does.

The computational complexity of the process to revise a belief base is at least exponential. This work focused on theoretical issues. If the framework presented here is ever used in practice, computations will have to be optimized.

The following example illustrates how one might deal with strict inequalities.

Example 10. Suppose the vocabulary is {q, r} and B3 = {(q) > 0.6}. Let α be (q ∧ ¬r) ∨ (¬q ∧ r). Let ε be a real number which tends to 0. Then
Π^{B3}_bnd = {
{(11, 1.0), (10, 0.0), (01, 0.0), (00, 0.0)},
{(11, 0.0), (10, 1.0), (01, 0.0), (00, 0.0)},
{(11, 0.6 + ε), (10, 0.0), (01, 0.4 − ε), (00, 0.0)},
{(11, 0.6 + ε), (10, 0.0), (01, 0.0), (00, 0.4 − ε)},
{(11, 0.0), (10, 0.6 + ε), (01, 0.4 − ε), (00, 0.0)},
{(11, 0.0), (10, 0.6 + ε), (01, 0.0), (00, 0.4 − ε)}},
(Π^{B3}_bnd)^GI_α = {
{(11, 0.0), (10, 0.5), (01, 0.5), (00, 0.0)},
{(11, 0.0), (10, 1.0), (01, 0.0), (00, 0.0)},
{(11, 0.0), (10, 0.3 + ε), (01, 0.7 − ε), (00, 0.0)},
{(11, 0.0), (10, 0.6 + ε), (01, 0.4 − ε), (00, 0.0)},
{(11, 0.0), (10, 0.8 + ε), (01, 0.2 − ε), (00, 0.0)}} and
B^α_{3bnd} = {(φ_11) ≤ 0, (φ_10) ≥ 0.3 + ε, (φ_01) ≤ 0.7 − ε, (φ_00) ≤ 0.0}.
Note that if we let B′ = {((q ∧ ¬r) ∨ (¬q ∧ r)) = 1, (q ∧ ¬r) > 0.3}, then Π^{B′} = Π^{B^α_{3bnd}}. □

It has been suggested by one of the reviewers that GI could be an affine map (in terms of geometry), thus allowing the proof of Theorem 1 to refer to existing results in the study of affine maps and be significantly simplified. The authors are not familiar with affine maps and thus leave investigation of this suggestion to other researchers.

Related Work

Voorbraak (1999) proposed partial probability theory (PPT), which allows probability assignments to be partially determined, and which distinguishes between probabilistic information based on (i) hard background evidence and (ii) some assumptions. He does not explicitly define the “constraint language”; however, from his examples and discussions, one can infer that he has something like the language L_PPT in mind: it contains all formulae which can be formed with sentences in our L^prob in combination with the connectives ¬, ∧ and ∨. A “belief state” in PPT is defined as the quadruple ⟨Ω, B, A, C⟩, where Ω is a sample space, B ⊂ L_PPT is a set of probability constraints, A ⊂ L_PPT is a set of assumptions and C ⊆ W “represents specific information concerning the case at hand” (an observation or evidence).⁴ Our epistemic state can be expressed as a restricted PPT “belief state” by letting Ω = W, B = B, A = ∅ and C = {w ∈ W | w ⊩ α}, where B is a belief base and α is an observation in our notation.

⁴Voorbraak (1999)'s “belief state” would rather be called an epistemic state or knowledge structure in our language.

Voorbraak (1999) mentions that he will only consider conditioning where the evidence does not contradict the current beliefs. He defines the set of belief states corresponding to the conditionalized PPT “belief state” as {b(· | C) ∈ Π | b ∈ Π^{B∪A}, b(C) > 0}. In our notation, this corresponds to {(b BC α) ∈ Π | b ∈ Π^B, b(α) > 0}, where α corresponds to C.

Voorbraak (1999) proposes constraining as an alternative to conditioning: let β ∈ L^prob be a probability constraint. In our notation, constraining Π^B on β produces Π^{B∪{β}}.

Note that expanding a belief set reduces the number of models (worlds), and expanding a PPT “belief state” with extra constraints also reduces the number of models (belief states / probability functions).

In the context of belief sets, it is possible to obtain any belief state from the ignorant belief state by a series of expansions. In PPT, constraining, but not conditioning, has the analogous property. This is one of the main reasons we prefer constraining and not conditioning to be the probabilistic version of expansion. (Voorbraak, 1999, p. 4)

But Voorbraak does not address the issue that C and β are different kinds of observations, so constraining, as defined here, cannot be an alternative to conditioning: C cannot be used directly for constraining, and β cannot be used directly for conditioning.

W.l.o.g., we can assume C is represented by α. If we take b GI α to be an expansion operation whenever b(α) > 0, then one might ask: is it possible to obtain any belief base B′ from the ignorant belief base B = ∅ by a series of expansions, using our approach? The answer is no. For instance, there is no observation or series of observations which can change B = ∅ into B′ = {(q) ≥ 0.6}. But if we were to allow sentences (constraints) in L^prob to be observations, then we could obtain any B′ from the ignorant B.

Grove and Halpern (1998) investigate what “update” (incorporation of an observation with current beliefs, such that the observation does not contradict the beliefs) means in a framework where beliefs are represented by a set of belief states. They state that the main purpose of their paper is to illustrate how different the set-of-distributions framework can be, “technically”, from the standard single-distribution framework. They propose six postulates characterizing what properties an update function should have. They say that some of the postulates are obvious, some arguable and one probably too strong. Out of seven (families of) update functions, only the one based on conditioning (Upd_cond(·)) and the one based on constraining (Upd_constrain(·)) satisfy all six postulates, where Upd_cond(Π^B, α) := {(b BC α) ∈ Π | b ∈ Π^B, b(α) > 0} and where they interpret Voorbraak's (1999) constraining as Upd_constrain(Π^B, α) := {b ∈ Π^B | b(α) = 1}. Grove and Halpern (1998) do not investigate the case when an observation must be incorporated while it is (possibly) inconsistent with the old beliefs (i.e., revision).

Kern-Isberner (2001) develops a new perspective on probabilistic belief change. Based on the ideas of Alchourrón, Gärdenfors, and Makinson (1985) and of Katsuno and Mendelzon (1991) (KM), the operations of revision and update, respectively, are investigated within a probabilistic framework. She employs as basic knowledge structure a belief base (b, R), where b is a probability distribution (belief state) of background knowledge and R is a set of probabilistic conditionals of the form (B|A)[x], meaning ‘the probability of B, given A, is x’. A universal inference operation – based on the techniques of optimum entropy – is introduced as an “adequate and powerful method to realize probabilistic belief change”.

By having a belief state available in the belief base, minimum cross-entropy can be used. The intention is then that an agent with belief base (b, T) should always reason w.r.t. belief state b_T := argmin_{c∈Π, c⊩T} R(c, b). Kern-Isberner (2001) then defines the probabilistic belief revision of (b, R) by evidence S as (b, R ∪ S). And the probabilistic belief update of (b, R) by evidence S is defined as (b_R, S).⁵ She distinguishes between revision as a knowledge-adding process and updating as a change-recording process. Kern-Isberner (2001) sets up comparisons of minimum cross-entropy belief change with AGM revision and KM update. Cases where, for update, new information R is inconsistent with the prior distribution b, or, for revision, is inconsistent with b or the context R, are not dealt with (Kern-Isberner, 2001, pp. 399–400).

Having a belief state available for modification when new evidence is to be adopted is quite convenient. As Voorbraak (1999) argues, however, an agent's ignorance can hardly be represented in an epistemic state where a single belief state must always be chosen.

The reader may also refer to a later paper (Kern-Isberner, 2008), in which many of the results of the work just reviewed are generalized to belief bases of the form (Ψ, R), where Ψ denotes a general epistemic state. In that paper, she considers two instantiations of Ψ, namely as a probability distribution and as an ordinal conditional function (first introduced by Spohn (1988)).

Yue and Liu (2008) propose a probabilistic revision operation for imprecise probabilistic beliefs in the framework of Probabilistic Logic Programming (PLP). New evidence may be a probabilistic (conditional) formula and need not be consistent with the original beliefs. Revision via imaging (e.g., GI) also overcomes this consistency issue.

Essentially, their probabilistic epistemic states are induced from a PLP program, which is a set of formulae, each formula having the form (ψ | φ)[l, u], meaning that the probability of the conditional (ψ | φ) lies in the interval [l, u].

The operator they propose has the characteristic that if an epistemic state represents a single probability distribution, revising collapses to Jeffrey's rule and Bayesian conditioning.

They mention that it is required that the set of models (distributions) of an epistemic state be convex. There might thus be an opportunity to employ their revision operation on a representative set of boundary distributions, as proposed in this paper.

⁵This is a very simplified version of what she presents. Please refer to the paper for details.


Conclusion

In this paper, we proposed an approach for generating a new probabilistic belief base from an old one, given a new piece of non-probabilistic information, where a belief base is a finite set of sentences, each stating the likelihood of a proposition about the world. In this framework, an agent's belief base represents the set of belief states compatible with the sentences in it. In this sense, the agent is able to represent its knowledge and ignorance about the true state of the world.

We used a version of the so-called imaging approach to implement the revision operation.

Two methods were proposed: revising a finite set of ‘boundary belief states’ and revising a least biased belief state. We focussed on the former and showed that the latter gives different results.

There were two main contributions in this paper. The first was to prove that the set of belief states satisfying B_new is exactly the set of belief states satisfying the original belief base, revised. The second was to uncover an interesting conflict in the results of the two belief base revision methods. It is worth investigating the reasons behind this difference further, as such an investigation could give more insight into the mechanisms behind the two methods and indicate possible pros and cons of each.

Acknowledgements

The work of Giovanni Casini has been supported by the Fonds National de la Recherche, Luxembourg, and cofunded by the Marie Curie Actions of the European Commission (FP7-COFUND) (AFR/9181001).

References

Alchourrón, C. E.; Gärdenfors, P.; and Makinson, D. 1985. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic 50(2):510–530.

Boutilier, C. 1998. A unified model of qualitative belief change: a dynamical systems perspective. Artificial Intelligence 98(1–2):281–316.

Chhogyal, K.; Nayak, A.; Schwitter, R.; and Sattar, A. 2014. Probabilistic belief revision via imaging. In Pham, D., and Park, S., eds., Proc. of PRICAI 2014, volume 8862 of LNCS, 694–707. Springer-Verlag.

Cover, T., and Thomas, J. 1991. Elements of Information Theory. New York: Wiley.

Csiszár, I. 1975. I-divergence geometry of probability distributions and minimization problems. Annals of Probability 3:146–158.

Dubois, D., and Prade, H. 1993. Belief revision and updates in numerical formalisms: An overview, with new results for the possibilistic framework. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, volume 1 of IJCAI'93, 620–625. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

Gärdenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. Massachusetts/England: MIT Press.

Goldszmidt, M.; Morris, P.; and Pearl, J. 1990. A maximum entropy approach to nonmonotonic reasoning. In Proceedings of the Eighth Natl. Conf. on Artificial Intelligence (AAAI-90), 646–652. AAAI Press.

Grove, A., and Halpern, J. 1998. Updating sets of probabilities. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, UAI'98, 173–182. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

Hunter, D. 1991. Maximum entropy updating and conditionalization. In Spohn, W.; Van Fraassen, B.; and Skyrms, B., eds., Existence and Explanation, volume 49 of The University of Western Ontario Series in Philosophy of Science. Springer Netherlands. 45–57.

Jaynes, E. 1978. Where do we stand on maximum entropy? In The Maximum Entropy Formalism. MIT Press. 15–118.

Katsuno, H., and Mendelzon, A. 1991. On the difference between updating a knowledge base and revising it. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, 387–394.

Kern-Isberner, G., and Rödder, W. 2004. Belief revision and information fusion on optimum entropy. International Journal of Intelligent Systems 19(9):837–857.

Kern-Isberner, G. 1998. Characterizing the principle of minimum cross-entropy within a conditional-logical framework. Artificial Intelligence 98(1–2):169–208.

Kern-Isberner, G. 2001. Revising and updating probabilistic beliefs. In Williams, M.-A., and Rott, H., eds., Frontiers in Belief Revision, volume 22 of Applied Logic Series. Kluwer Academic Publishers, Springer Netherlands. 393–408.

Kern-Isberner, G. 2008. Linking iterated belief change operations to nonmonotonic reasoning. In Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning, 166–176. Menlo Park, CA: AAAI Press.

Kullback, S. 1968. Information Theory and Statistics, volume 1. New York: Dover, 2nd edition.

Lehmann, D.; Magidor, M.; and Schlechta, K. 2001. Distance semantics for belief revision. Journal of Symbolic Logic 66(1):295–317.

Lewis, D. 1976. Probabilities of conditionals and conditional probabilities. Philosophical Review 85(3):297–315.

Paris, J., and Vencovská, A. 1997. In defense of the maximum entropy inference process. International Journal of Approximate Reasoning 17(1):77–103.

Paris, J. 1994. The Uncertain Reasoner's Companion: A Mathematical Perspective. Cambridge: Cambridge University Press.

Poole, D., and Mackworth, A. 2010. Artificial Intelligence: Foundations of Computational Agents. New York, USA: Cambridge University Press.

Rens, G., and Meyer, T. 2015. A new approach to probabilistic belief change. In Russell, I., and Eberle, W., eds., Proceedings of the International Florida AI Research Society Conference (FLAIRS), 582–587. AAAI Press.

Shore, J., and Johnson, R. 1980. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Transactions on Information Theory 26(1):26–37.

Spohn, W. 1988. Ordinal conditional functions: A dynamic theory of epistemic states. In Harper, W., and Skyrms, B., eds., Causation in Decision, Belief Change, and Statistics, volume 42 of The University of Western Ontario Series in Philosophy of Science. Springer Netherlands. 105–134.

Voorbraak, F. 1999. Partial probability: Theory and applications. In Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, 360–368. url: decsai.ugr.es/ smc/isipta99/proc/073.html.

Yue, A., and Liu, W. 2008. Revising imprecise probabilistic beliefs in the framework of probabilistic logic programming. In Proceedings of the Twenty-third AAAI Conf. on Artificial Intelligence (AAAI-08), 590–596.


Forschungsberichte der Fakultät für Informatik der Technischen Universität Dortmund

ISSN 0933-6192

Requests to: Dekanat Informatik | TU Dortmund, D-44221 Dortmund