Models of Argument for Deliberative Dialogue in Complex Domains
Alice Toniolo
MSc, Computer Science Engineering, University of Padova, Italy, 2009
BSc, Computer Science Engineering, University of Padova, Italy, 2006
A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy of the
University of Aberdeen.
Department of Computing Science
2013
Declaration
No portion of the work contained in this document has been submitted in support of an application
for a degree or qualification of this or any other university or other institution of learning. All
verbatim extracts have been distinguished by quotation marks, and all sources of information have
been specifically acknowledged.
Signed: Alice Toniolo
Date: April 30th, 2013
Abstract
In dynamic multiagent systems, self-motivated agents pursuing individual goals may interfere
with each other’s plans. Agents must, therefore, coordinate their plans to resolve dependen-
cies among them. This drives the need for agents to engage in dialogue to decide what to do
in collaboration. Agreeing what to do is, however, a complex activity when agents come to an
encounter with different objectives and norm expectations (i.e. societal norms that constrain
acceptable behaviour). Argumentation-based models of dialogue support agents in deciding what to
do by analysing the pros and cons of decisions, and enable conflict resolution by revealing structured
background information that facilitates the identification of acceptable solutions. Existing models of
deliberative dialogue, however, commonly assume that agents have a shared goal, and to date their
effectiveness has been shown only through the use of extended examples.
In this research, we propose a novel model of argumentation schemes to be integrated in
a dialogue for the identification of plan, goal and norm conflicts when agents have individual
but interdependent objectives. We empirically evaluate our model within a dynamic system to
establish how the information shared through argumentation schemes influences dialogue outcomes.
We show that by employing our model of arguments in dialogue, agents achieve more successful
agreements. The resolution of conflicts and the identification of more feasible interdependent plans are
achieved through the sharing of focussed information driven by argumentation schemes. Agents
may also consider some conflicts more important than others, namely those that cause a higher loss of
utility if unresolved. We explore the use of strategies for agents to select arguments that are more likely to
solve important conflicts. We show through an empirical evaluation that the most effective strat-
egy combines the drive for agents to search for additional conflicts and the need to solve the most
important ones already identified.
Acknowledgements
I would like to thank my supervisors Prof. Tim Norman and Prof. Katia Sycara for their encour-
agement, guidance, endless patience and for taking a risk on having me as a student. I would also
like to thank Prof. John Farrington and Dr. Nir Oren for their advice with my research.
This achievement would not have been possible without the help of my family. I would like to
thank my mum, Brunella, for giving me the kick to start and continue this journey, my dad, Renzo,
for bringing perspective into my life, and of course my brother, Elia, and my little sister, Angelica.
I would like to especially thank JP who has patiently stood by my side with endless smiles and
support.
A big thanks goes to those who have started with me on the quest for a T-word following the
wise words of Mr BossMan in 245, and to those who have continued with me on this quest in 917.
In both offices I met some extraordinary friends that will always be in my heart. Rajam and Geeth,
Chris and Daniele, I would have never arrived here without their help. And thanks to all past and
current inhabitants of 245 Murat, David, Mairi, Federico and the coffee maker, George. And a big
thanks goes to the inhabitants of the Cool Office in 917, Toni, Peter, Andy, Gina, Mukta, Ramona,
Fiona, Ruth, Danny, Danilo, Lizzy and Claire. And of course a thanks to everyone in dot.rural and
Meston.
I would like to thank the foreign legion for the amazing time spent in Aberdeen, thanks to
Sylvain, Sandy, Claudia and Alessio, Olivier and Nil, Fred and Anthi, Fabio and Silke, Alberto
and Anna, Luca and Lin, Ian, Alessio, and the many friends I met on the way. Finally I want to
thank my Italian friends who have always made the time to send their encouragement from the
other side of the channel.
My studies were funded jointly by the U.S. Army Research Laboratory, the dot.rural Digital
Economy Hub and the International Technology Alliance.
This research was sponsored by the U.S. Army Research Laboratory, under project W911-
NF-08-1-0447, “Proactive agent assistance for military missions”.
This research is supported by the award made by the RCUK Digital Economy programme to
the dot.rural Digital Economy Hub; award reference: EP/G066051/1.
This research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry
of Defence and was accomplished under Agreement Number W911NF-06-3-0001. The views and
conclusions contained in this document are those of the author(s) and should not be interpreted as
representing the official policies, either expressed or implied, of the U.S. Army Research Labora-
tory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and
U.K. Governments are authorized to reproduce and distribute reprints for Government purposes
notwithstanding any copyright notation hereon.
Contents
1 Introduction 12
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.5 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6 Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2 Related work 23
2.1 Problem background: Planning and coordination . . . . . . . . . . . . . . . . . 23
2.1.1 Agents and planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.2 Coordination, collaboration and communication . . . . . . . . . . . . . . 26
2.1.3 Communication and dialogue . . . . . . . . . . . . . . . . . . . . . . . 30
2.2 Argumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.1 Argumentation in multiagent systems . . . . . . . . . . . . . . . . . . . 34
2.3 Argumentation for deliberative dialogue . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Empirical evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3 Adopted dialogue system 58
3.1 Dialogue system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 The topic language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.1 Situation calculus and extensions . . . . . . . . . . . . . . . . . . . . . 62
3.2.2 A model of plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.3 Plan language for dialogues . . . . . . . . . . . . . . . . . . . . . . . . 73
3.2.4 Re-elaboration of plans . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.3 The procedural layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.3.1 General elements of a procedural layer . . . . . . . . . . . . . . . . . . 82
3.3.2 Protocol, move status and outcome . . . . . . . . . . . . . . . . . . . . . 84
3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4 Formal model of arguments 88
4.1 Argumentation framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.1.1 The model of dialogue: An overview . . . . . . . . . . . . . . . . . . . 89
4.2 The concrete layer: A model of conflicts . . . . . . . . . . . . . . . . . . . . . 93
4.2.1 Concurrent actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.2 Plan constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2.3 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.2.4 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.5 Justifications for adopting plan elements . . . . . . . . . . . . . . . . . . 104
4.3 The logical layer: A model of arguments . . . . . . . . . . . . . . . . . . . . . 105
4.3.1 Claim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3.2 Arguments for concurrent actions . . . . . . . . . . . . . . . . . . . . . 107
4.3.3 Arguments for plan constraints . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.4 Arguments for norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.3.5 Arguments for goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.4 The dialectical layer: Relationships between arguments . . . . . . . . . . . . . . 114
4.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5 Identifying effective plans through dialogue 121
5.1 Successful dialogue and plan feasibility . . . . . . . . . . . . . . . . . . . . . . 121
5.2 Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.2.1 Choosing the next move . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5.3 A system for evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.3.1 Initial plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.3.2 Dynamic replanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.3.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.4 Empirical evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.4.1 Experiment design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.4.2 Statistical methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6 Argument selection strategies 151
6.1 Argumentation strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6.2 Strategies driven by plan feasibility . . . . . . . . . . . . . . . . . . . . . . . . 152
6.2.1 Strategy Gprior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.2.2 Choosing the next move . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.2.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.3 Empirical evaluation of strategies for more feasible plans . . . . . . . . . . . . . 158
6.3.1 Experiment design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.4 Strategies driven by conflict importance . . . . . . . . . . . . . . . . . . . . . . 165
6.4.1 Utility of the plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.4.2 Selection strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.4.3 Choosing the next move . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4.4 System for evaluation of strategies for more important conflicts . . . . . 173
6.4.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.5 Empirical evaluation of strategies for more important conflicts . . . . . . . . . . 175
6.5.1 Experiment design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.5.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
7 Discussion and future work 193
7.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.2 Limitations and future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
7.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
7.3.1 Digital Economy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
7.3.2 Collaborative Intelligence Analysis . . . . . . . . . . . . . . . . . . . . 203
8 Conclusions 206
Appendix A Statistical tests of Chapter 6 209
Glossary 218
Bibliography 224
List of Tables
2.1 Arguments for AATS in Figure 2.5. . . . . . . . . . . . . . . . . . . . . . . . . 48
2.2 Dialogue protocol defined in Kok et al. (2011). . . . . . . . . . . . . . . . . . . 50
2.3 Argumentation-based frameworks for deliberative dialogue. . . . . . . . . . . . 56
3.1 Speech acts and general protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.1 Formalisation of Arguments Argc. . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.2 Formalisation of Arguments Argp. . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.3 Formalisation of Arguments Argn. . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4 Formalisation of Arguments Argg. . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.5 Example of dialogue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.1 Speech acts of each protocol P. . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.2 Experimental parameters for protocols P. . . . . . . . . . . . . . . . . . . . . . 133
5.3 Description of statistical parameters and plots. . . . . . . . . . . . . . . . . . . . 135
5.4 Coefficients δ2-δ3 for establishing the statistical difference between linear models. 137
5.5 Statistical parameters for conflicts solved in protocols P. . . . . . . . . . . . . . 140
5.6 Comparative tests for conflicts solved in protocols P. . . . . . . . . . . . . . . . 140
5.7 Statistical parameters for conflicts removed in protocols P. . . . . . . . . . . . . 143
5.8 Comparative tests for conflicts removed in protocols P. . . . . . . . . . . . . . . 143
5.9 Statistical parameters for conflicts dropped in protocols P. . . . . . . . . . . . . 144
5.10 Comparative tests for conflicts dropped in protocols P. . . . . . . . . . . . . . . 144
5.11 Statistical parameters for number of arguments in protocols P. . . . . . . . . . . 146
5.12 Comparative tests for number of arguments in protocols P. . . . . . . . . . . . . 146
6.1 Experimental parameters for strategies G-a. . . . . . . . . . . . . . . . . . . . . 159
6.2 Statistical parameters for conflicts solved in strategies G-a. . . . . . . . . . . . . 160
6.3 Comparative tests for conflicts solved in strategies G-a. . . . . . . . . . . . . . . 160
6.4 Statistical parameters for conflicts removed in strategies G-a. . . . . . . . . . . . 161
6.5 Comparative tests for conflicts removed in strategies G-a. . . . . . . . . . . . . . 161
6.6 Statistical parameters for number of arguments in strategies G-a. . . . . . . . . . 163
6.7 Comparative tests for number of arguments in strategies G-a. . . . . . . . . . . . 163
6.8 Experimental parameters for strategies G-b. . . . . . . . . . . . . . . . . . . . . 177
6.9 Comparative tests for conflicts solved in strategies G-b. . . . . . . . . . . . . . . 178
6.10 Comparative tests for value of conflicts solved in strategies G-b. . . . . . . . . . 179
6.11 Comparative tests for value of conflicts removed in strategies G-b. . . . . . . . . 180
6.12 Comparative tests for number of arguments in strategies G-b. . . . . . . . . . . . 186
A.1 Comparative tests for conflicts solved in strategies G-b. . . . . . . . . . . . . . . 212
A.2 Statistical parameters for conflicts solved in strategies G-b. . . . . . . . . . . . . 212
A.3 Comparative tests for value of conflicts solved in strategies G-b. . . . . . . . . . 213
A.4 Statistical parameters for value of conflicts solved in strategies G-b. . . . . . . . 213
A.5 Comparative tests for value of conflicts removed in strategies G-b. . . . . . . . . 215
A.6 Statistical parameters for value of conflicts removed in strategies G-b. . . . . . . 215
A.7 Comparative tests for number of arguments in strategies G-b. . . . . . . . . . . . 216
A.8 Statistical parameters for number of arguments in strategies G-b. . . . . . . . . . 216
A.9 Comparative tests for G1drop,G3comb,G3nosol: Overall significance. . . . . . . . . . 217
A.10 Comparative tests for G1drop,G3comb,G3nosol: Multiple comparisons & significance. 217
List of Figures
2.1 Agent model in Russell and Norvig (2003). . . . . . . . . . . . . . . . . . . . . 24
2.2 Toulmin’s argument schema (Toulmin, 1958, page 123). . . . . . . . . . . . . . 32
2.3 Planning problems type 1-2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4 Planning problems type 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.5 Example of AATS (Black and Atkinson, 2009). . . . . . . . . . . . . . . . . . . 47
2.6 VAF for arguments in Table 2.1. . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.7 Dialectical tree from the example in Kok et al. (2011). . . . . . . . . . . . . . . 52
3.1 Dialogue system: Topic language and procedural layer. . . . . . . . . . . . . . . 59
3.2 Details of the topic language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.3 Example of possible situation tree for delivering a package. . . . . . . . . . . . . 65
3.4 Action with duration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5 Norm restrictions on the situations tree. . . . . . . . . . . . . . . . . . . . . . . 70
3.6 Example of an agent x’s plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.7 Example of a temporally grounded plan. . . . . . . . . . . . . . . . . . . . . . . 74
3.8 Example of causal link. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.9 Example of plan extension for agent x. . . . . . . . . . . . . . . . . . . . . . . 81
3.10 Example of a dialectical tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.1 Dialogue system: Concrete, logical, and dialectical layers. . . . . . . . . . . . . 92
4.2 Concrete layer: Conflicts between plans. . . . . . . . . . . . . . . . . . . . . . . 94
4.3 Concurrency conflicts CFC.1 and CFC.2. . . . . . . . . . . . . . . . . . . . . . 97
4.4 Concurrency conflicts CFC.3 and CFC.4. . . . . . . . . . . . . . . . . . . . . . 98
4.5 Plan constraint conflicts CFP.1 and CFP.2. . . . . . . . . . . . . . . . . . . . . . 100
4.6 Norm conflicts CFN.1 and CFN.2. . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.7 Goal conflict CFG.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.8 Example of Argc: Argument for concurrency (ATK1.3). . . . . . . . . . . . . . . 109
4.9 Example of Argp: Argument for plan constraints (ATK2.2). . . . . . . . . . . . . 111
4.10 Example of Argn: Argument for norms (ATK3.2). . . . . . . . . . . . . . . . . . 112
4.11 Example of Argg: Argument for goals (ATK4.1). . . . . . . . . . . . . . . . . . 114
4.12 Defeating and supporting relationships. . . . . . . . . . . . . . . . . . . . . . . 116
5.1 System architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.2 Overall execution cycle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.3 Example of plan conflicts and associated conflicting intervals. . . . . . . . . . . 131
5.4 Percentage of dialogue with successful outcome as total conflicts increases. . . . 138
5.5 Conflicts solved as total number of conflicts increases in protocols P. . . . . . . 140
5.6 Residuals plots for normality for conflicts solved in protocols P. . . . . . . . . . 141
5.7 Conflicts removed as total number of conflicts increases in protocols P. . . . . . 143
5.8 Conflicts dropped as total number of conflicts increases in protocols P. . . . . . 144
5.9 Arguments exchanged as total number of conflicts increases in protocols P. . . . 146
5.10 Number of arguments as the ratio of conflicts solved increases in protocols P. . . 147
5.11 Proportion of argument types used on average per run in protocols P. . . . . . . 147
6.1 Dialogue system: Heuristic layer. . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.2 Scenario for Example 6.2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.3 Conflicts solved as total number of conflicts increases in strategies G-a. . . . . . 160
6.4 Conflicts removed as total number of conflicts increases in strategies G-a. . . . . 161
6.5 Number of arguments as the total number of conflicts increases in strategies G-a. 163
6.6 Number of arguments as the ratio of conflicts solved increases in strategies G-a. . 164
6.7 Proportion of argument types used on average per run in strategies G-a. . . . . . 164
6.8 Example of utility assignments in an individual plan. . . . . . . . . . . . . . . . 169
6.9 Replanning in strategies for more important conflicts. . . . . . . . . . . . . . . 173
6.10 Scenario for Example 6.4.5: Value of conflicts. . . . . . . . . . . . . . . . . . . 175
6.11 Conflicts solved as total number of conflicts increases in strategies G-b. . . . . . 178
6.12 Value of conflicts solved as total value of conflicts increases in strategies G-b. . . 179
6.13 Value of conflicts removed as total value of conflicts increases in strategies G-b. . 180
6.14 Results for Hypothesis 1-b. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.15 Results for Hypothesis 2-b. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6.16 Results for Hypothesis 3-b. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.17 Number of arguments as total value of conflicts increases in strategies G-b. . . . 186
6.18 Ratio of value of conflicts solved as total value of conflicts increases in G-b. . . . 188
6.19 Value of conflicts solved per argument as total value of conflicts increases in G-b. 188
A.1 Conflicts solved as total number of conflicts increases in strategies G-b. . . . . . 210
A.2 Value of conflicts solved as total value of conflicts increases in strategies G-b. . . 210
A.3 Value of conflicts removed as total value of conflicts increases in strategies G-b. . 211
A.4 Number of arguments as total value of conflicts increases in strategies G-b. . . . 211
A.5 Ratio of value of conflicts solved as total value of conflicts increases in G-b. . . . 214
A.6 Value of conflicts solved per argument as total value of conflicts increases in G-b. 214
Chapter 1
Introduction
Collaborative decision making among agents in a team context is a complex activity. If agents
are motivated by individual objectives and norms, agreements must be made on the best course of
action to adopt in order to coordinate individual activities while complying with societal norms. In
this thesis, our core research question is “How can we develop effective mechanisms for software
agents to establish agreements on how to act together?”. We investigate argumentation-based
models of deliberative dialogue as a mechanism for discussing interdependent norm-driven plans
to achieve individual objectives. In addition, we look at methods for guiding the dialogue towards
the achievement of more satisfactory solutions for acting in collaboration. In this chapter we
discuss the aims and motivations of our work, present our contributions, and introduce the
structure of this thesis.
1.1 Overview

A software agent is an entity capable of reasoning about perceptions of the environment and acting
accordingly. Agents are motivated by goals and designed to autonomously decide how to achieve
these goals. A multiagent system is a distributed system where multiple agents interact within the
same environment considering how other agents affect their welfare (Wooldridge and Jennings,
1995). In complex situations, agents must build plans of action in order to satisfy their goal(s).
When working within the same environment, agents should coordinate their activities to prevent
deadlocks in concurrent use of resources or inconsistencies in performing certain tasks. In
order to avoid harmful interaction, coordination may be ensured at runtime by pre-establishing
roles and commitments of agents, or by building a common plan of all the future actions and pos-
sible interactions before executing them (Jennings, 1996). Agents are internally motivated by the
need to satisfy their goals; however, when agents act in a society they may also adopt behaviour
that is socially acceptable in that society according to its customs. These external expectations
influence agents’ behaviour and the way agents interact in the environment. This behaviour is
regulated by norms that define what is acceptable and unacceptable. Agents may decide which
norms to comply with, and their behaviour must change accordingly. However, when a norm is
violated, the agents subject to that norm may be sanctioned by the society (Castelfranchi et al., 2000;
Vasconcelos et al., 2009).
Agents may act in a (partially) cooperative manner, motivated by a common goal, or in a
competitive manner, in which each individual attempts to satisfy its own goals (Sycara et al., 1996;
Jennings, 2000). Collaboration, either due to a common goal or to dependencies between individual
goals, enhances the ability of agents to achieve complex objectives by combining individual
capabilities. In a fully collaborative team, agents interact by (direct or indirect) communication in
order to achieve a coherent team behaviour, and agents may build a plan in collaboration by merg-
ing partial plans or through one agent deciding on a common plan that each agent is expected
to adhere to (Sycara, 1989; Tambe, 1997). When competitive, decisions made by individuals,
although locally satisfactory, may interfere with those of others due to, for example, concurrent
resource use or different normative expectations about how things are done. However, individual
plans may still be highly interdependent due to the coexistence of the team in the environment.
Moreover, agents may assist each other in the performance of certain actions while maintaining their
individual objectives, by acting on behalf of another agent or by establishing preconditions for
further actions. When acting within a group, agents’ behaviour may be influenced by the norms
that regulate the interaction between individual members of the society. In particular, norms define
expected behaviour often dictated by the role of an agent in the society, and influence an agent’s
choice of how to achieve individual or common goals in order to avoid violations. Agents may
be designed to comply with norms, however, different agents may have different norm expecta-
tions that may only be discovered when agents come to an encounter in the society and develop
awareness of other agents’ intentions. In such interdependent contexts, harmful and incoherent
interactions are more likely to occur because agents are not aware of how other agents intend to
modify the environment and how other agents are expected to behave according to their norms. In
complex domains where agents have individual but interdependent objectives, existing approaches
have focussed on managing such interactions between agents by nominating a “wise” agent who
coordinates the activities, eliminates conflicts and communicates commands to the team on how
to perform individual plans (Nwana et al., 1996; Dimopoulos and Moraitis, 2006).
The use of a single “wise” agent, who determines the joint plan for all agents in the same
environment, has its disadvantages. Agents modify their individual plans to adhere to the overall
plan imposed on the society but they may not be permitted to achieve what they intend to, or take
a course of action that is more convenient for them. Simple inter-agent communication of plan
elements is essential to enforce coordination, but it may not be expressive enough to represent com-
plex behaviour such as motivations for choices, preferences, beliefs and so on. In order to support
more effective collaboration, the need to find agreements between agents on how to act in collabo-
ration arises. Existing work has shown that the use of argumentation models for practical reasoning
is a promising approach for decision making over the best collaborative action or set of actions in
a plan (Atkinson and Bench-Capon, 2007; Rahwan and Amgoud, 2007). Argumentation-based di-
alogue based upon such reasoning mechanisms facilitates agreements through the exchange of in-
formation about collaborative tasks, highlighting possible alternatives and conflicting goals (Black
and Atkinson, 2009; Kok et al., 2011).
In a broader perspective, argumentation theory has increasingly received attention in multia-
gent systems as a mechanism to represent autonomous reasoning under uncertain and incomplete
information (Dung, 1995; Amgoud, 2003; Bench-Capon, 2003; Prakken, 2006b; Oren et al., 2007;
Dung et al., 2009). In this context Dung’s seminal theory of abstract argumentation has been most
influential in today’s research on argumentation (Dung, 1995). Argumentation theory aims at
identifying sets of collectively believable arguments by providing methods for deriving the ac-
ceptability status of arguments. Arguments represent defeasible logical inferences; for example in
the context of practical reasoning assume that Billy intends to visit Venice and Jane suggests he
takes the train with argument Arg1, “There is a train from Milan to Venice, if Billy takes that train
he will get to Venice, thus Billy should take the train from Milan to Venice”. The fact that Billy
should take this train may be tentatively accepted unless other evidence dismissing this claim is
collected. Arguments are defeasible in the sense that the premises warranting the conclusion are
tentatively accepted when weighing existing evidence for or against it. However, in the light of
newly received information, the conclusions may be retracted if that information invalidates the claim.
Furthermore, in Dung’s theory, an argument is rationally believable if its supporting arguments are
defended against attacking arguments (Dung, 1995). For example, consider that Jane has asserted
argument Arg1 whereby Billy should take the train. An attack is an argument Arg2, “Venice is
built upon islands, thus there are no trains that go to Venice”. The action for Billy to take the
train cannot be rationally accepted since Arg2 attacks Arg1. However, if Arg2 is attacked by a new
argument Arg3, such as “Venice is connected by bridges to the mainland, thus it is accessible by
train”, argument Arg1, defended by Arg3, may be reinstated.
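This reinstatement pattern can be computed mechanically. The following is a minimal Python sketch, not part of the thesis, of the grounded semantics of Dung (1995): starting from the empty set, it repeatedly collects the arguments whose every attacker is counter-attacked by the set built so far. The function name and data representation (a set of argument labels and a set of attack pairs) are illustrative choices.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework, given a set of arguments and a set of (attacker,
    target) pairs, by iterating the characteristic function."""
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current set if each of
        # its attackers is itself attacked by a member of that set
        # (arguments with no attackers are trivially acceptable).
        acceptable = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if acceptable == extension:  # least fixpoint reached
            return extension
        extension = acceptable

# The example from the text: Arg2 attacks Arg1, Arg3 attacks Arg2.
args = {"Arg1", "Arg2", "Arg3"}
atts = {("Arg2", "Arg1"), ("Arg3", "Arg2")}
print(grounded_extension(args, atts))  # {'Arg1', 'Arg3'}
```

As the text describes, Arg1 is not acceptable on its own (it is attacked by Arg2), but once the unattacked Arg3 enters the extension and defeats Arg2, Arg1 is defended and reinstated.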
Argumentation, used within dialogue, provides a means to state reasons for claims, in order
to justify actions or establish the truth of particular statements, criticise others’ opinions, per-
suade others or complement each other’s knowledge (Walton and Krabbe, 1995). In particular,
argumentation-based models of dialogue enable agents to effectively resolve conflicts of opinion
(Sycara, 1989; McBurney and Parsons, 2002; Rahwan et al., 2004; Atkinson et al., 2006; Maudet
et al., 2007). This has been shown in different types of dialogue; for example, in persuasion di-
alogue, using argumentation an agent is able to influence another agent’s intentions (Kraus et al.,
1998), and in negotiation, argumentation enables a more efficient allocation of tasks in a resource-
constrained environment (Karunatillake et al., 2009). In deliberation, the aim of the dialogue is to
decide what to do in collaboration, but conflicts of opinion may arise because agents have differ-
ent views on the course of action to adopt according to their preferences or commitments. Recent
developments in argumentation for deliberative dialogue have shown the benefits of modelling
collaboration among agents in the creation of consistent plan options with a shared goal using
different argumentation frameworks (Black and Atkinson, 2009; Kok et al., 2011; Ferrando et al.,
2011; Belesiotis et al., 2010b; Tang and Parsons, 2005; Medellin-Gasque et al., 2011). In this
context, a proponent states a goal and a plan option to achieve this goal. The other team members
engage in dialogue following a protocol to discuss in turn other alternatives and possible conflicts.
In contrast with persuasion dialogue, where agents attempt to establish a common position on the
truth of a particular statement, in deliberation agents must decide on what is the best option to
adopt. The decision is complex because agents choose actions independently according to their
preferences or commitments, and these acts affect the environment in which other agents operate,
leading to conflicts that are difficult to predict. For example, when Jane asserts Arg1, another
team member, Jack, must take into consideration various reasons that may impede Billy from
taking the train from Milan to Venice: Billy has another goal, going to Rome,
and there is no time to visit both cities; the fact that there are no trains to Venice stated in Arg2 may be caused by the train company being on strike; or the timetable not being accurate; or the
trains may be diverted for maintenance of the railway, etc. In this situation, making a decision is a
difficult task because the best alternative must resolve inconsistencies and also respect the options
preferred by each individual.
Argumentation provides a natural way for representing and reasoning about positive and neg-
ative reasons for accepting or rejecting a plan option. Such a decision is made according to three
steps (Baroni et al., 2011): the generation of arguments and their relationships, which is based
upon a logical language to represent knowledge and a structure for arguments; the identification
of a set of arguments that are acceptable (extensions) given a criterion that defines the acceptability
status of arguments (semantics) by determining how to resolve conflicts between attacking argu-
ments; and the identification of acceptable conclusions in order to decide what to do (in practical
reasoning). Arguments can be represented with simple logical inferences, but in computational
systems the use of argumentation schemes has been introduced to formulate arguments since they
provide structures that can be applied to diverse information and capture domain-dependent de-
feasible inference structures (Walton, 1996; Atkinson and Bench-Capon, 2007; Oren et al., 2007).
In deliberation, the use of argumentation schemes is considered a promising approach for rep-
resenting and exchanging opinions about plan and action proposals (Black and Atkinson, 2009;
Medellin-Gasque et al., 2011).
An argumentation scheme is a structured way of making presumptive inferences, stating ex-
plicitly what the premises are and what conclusions can be drawn from these premises. Associated
with an argumentation scheme are critical questions (CQs) that can be used to challenge the valid-
ity of arguments. From our previous example, the structure of the logical inference of argument
Arg1, extracted and represented as a reasoning pattern, constitutes the following argumentation
scheme (derived from Walton, 1996):
- Premise 1 - G is a goal for person P,
- Premise 2 - Doing action A is a means to realise goal G,
⇒ Conclusion - Therefore person P ought to do action A.
where the instantiation of the scheme for Arg1 is “Going to Venice is a goal for Billy, taking the
train from Milan to Venice is a means to realise going to Venice, therefore Billy ought to take the
train from Milan to Venice”. Critical questions may point towards conflicts between this inference
and other goals that Billy has, highlight alternatives on how to reach Venice, or challenge whether
it is possible to take this train. Furthermore, the same pattern, or scheme, can be used to represent
an alternative option; for example Jane may suggest that Billy should take the bus instead of the
train: “Going to Venice is a goal for Billy, taking the bus from Milan to Venice is a means to
realise going to Venice, therefore Billy ought to take the bus from Milan to Venice”. In Walton
(1996) the critical question that leads to the generation of this argument is “Are there alternative
ways of realising the same goal?”.
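A scheme of this kind can be read as a reusable template: binding its variables yields an argument, and the critical questions attached to it prompt alternative bindings or challenges. The sketch below is our own illustration of that idea, not the scheme formalisation developed later in the thesis:

```python
# Illustrative sketch: a Walton-style practical reasoning scheme as a
# template with attached critical questions. Class and question wording
# are our own, paraphrased from the running example.

from dataclasses import dataclass

@dataclass(frozen=True)
class PracticalReasoningScheme:
    """G is a goal for P; doing A is a means to realise G;
    therefore P ought to do A."""
    person: str
    goal: str
    action: str

    def premises(self):
        return (f"{self.goal} is a goal for {self.person}",
                f"Doing '{self.action}' is a means to realise {self.goal}")

    def conclusion(self):
        return f"{self.person} ought to {self.action}"

# Critical questions challenge a particular instantiation of the scheme.
CRITICAL_QUESTIONS = (
    "Are there alternative ways of realising the same goal?",
    "Does the action conflict with other goals of the person?",
    "Is it possible to perform the action?",
)

arg1 = PracticalReasoningScheme("Billy", "going to Venice",
                                "take the train from Milan to Venice")
# The first critical question licenses an alternative instantiation
# of the same pattern, e.g. taking the bus instead:
arg_alt = PracticalReasoningScheme("Billy", "going to Venice",
                                   "take the bus from Milan to Venice")
print(arg1.conclusion())  # Billy ought to take the train from Milan to Venice
```

Both `arg1` and `arg_alt` instantiate the same reasoning pattern; only the action binding differs, which is exactly how the bus alternative is generated in the text above.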
In this introduction we have discussed the need for mechanisms to support agents in collaborative decision making when debating about which plans to perform in order to coordinate individual
activities as well as obey societal norms. Argumentation-based dialogue, in particular based upon
argumentation schemes, is a promising approach to enable agents to establish agreements on how
to act in collaboration. Argumentation enables agents to explore individual reasons for the inten-
tion to adopt a particular action and to persuade other agents to agree on this action by changing
their view of the world with the exchange of structured information through arguments. Argumen-
tation schemes provide structure for those rationales and enable complex reasoning in collabora-
tive planning domains. In the next section, we discuss the need for a model of arguments that deals
with complex problems of collaboration where agents have individual but interdependent plans in
order to effectively establish agreements on how to act together.
1.2 Motivation

In this thesis we focus on a complex planning problem where heterogeneous agents must fulfil
individual objectives. In this context, agents need to collaborate to accomplish certain tasks, thus
they must form an agreement on the course of action to adopt (Kraus et al., 1998). This leads agents
to engage in dialogue focussed on resolving conflicting opinions on what to do. Planning indepen-
dently without knowing how other agents modify the environment may cause harmful interactions,
hence the need for agents to coordinate their activities. Furthermore, agents are motivated by societal norms to adopt certain actions or to achieve certain states of the world in their plans, and when asked to adopt or drop actions in their individual plans, violations of these norms may occur. The question
that we address in this research, therefore, focusses on how we can develop effective mechanisms
for agents to establish agreements on how to act together, where plans driven by individual objectives and norms may interfere with those of other agents. Solutions proposed for this problem often involve agents sharing their entire plans to permit coordination (Dimopoulos and
Moraitis, 2006) or the introduction of a coordinator to manage interdependencies (Castelfranchi,
1995). Sharing entire plans with other agents may not, however, be a desirable approach. There
are a number of reasons for this: different agents may not want others to know about their inter-
nal procedures; there may be policies that prohibit the sharing of plan elements; sharing the entire plan may be infeasible due to time and resource constraints; or planning may be performed under time-stressed conditions where it is not possible to discuss the overall plan.
Different organisations are often required to collaborate in order to achieve objectives that
may not be reachable by single individuals. Although collaboration increases and enriches the
capabilities and competences of a single organisation, working in a team brings its own challenges
due to differences in knowledge, strategies, and resources. Collaboration in the sense of Kraus
et al. (1998) assumes that each organisation is aware of the internal procedures and requirements of every other organisation, and that, when planning an intervention, the organisations have shared objectives and decide on a common plan of action. This is not always a realistic scenario. One such
context that motivates our research is the example of a team formed by organisations with a common interest in responding to a natural disaster. We consider, in particular, a team formed by
a humanitarian organisation and a local authority acting in the same territory for organising rescue
operations when a disaster strikes. In this situation, although the overall objective is to deal with
the emergency and limit the impact of the disaster, the humanitarian organisation may have different objectives from the local authority. For example, the local authority is responsible for assessing the extent of the damage and identifying zones requiring major intervention, ensuring that hospitals and rest centres are functioning and reachable, organising evacuation plans, maintaining the
transport infrastructure, and so on. The humanitarian organisation may instead focus on people’s needs: delivering first aid and health services, collecting and delivering supplies, preventing
the rapid spread of diseases due to contaminated water supplies, poor hygiene conditions and so
on. These activities are performed within the same territory and may require collaboration, for ex-
ample in the deployment of field hospitals or in the organisation of search, rescue and evacuation
operations. In a realistic scenario, the need to perform activities in collaboration does not however
enforce the organisations to share their entire plans and requirements. Furthermore, each organ-
isation follows individual codes of practice where norms influence and guide the behaviour of
members of that organisation but also constrain the interaction between members across organisa-
tions. For example, a member of the humanitarian organisation may only intervene in a particular
zone if escorted by members of the local authority in order to avoid risks. However, incompatible objectives, in addition to real-time constraints, increase the effort of collaboration. In the context of a disaster-response, the urgency of the intervention makes the planning activity more difficult
since there is not enough time to discuss detailed interdependent plans. This stressful condition
will unavoidably impact the quality of the resulting plans. This example will be discussed in detail
throughout the thesis.
In this research, we advocate the use of a model of arguments based upon argumentation
schemes, to be employed in a deliberative dialogue, for supporting agents in establishing agreements over highly interdependent plans while maintaining appropriately limited collaboration. During the dialogue, agents must be able to identify conflicts among interdependent plans that cause harmful
interactions because actions are not coordinated, that produce undesirable behaviour in the society
by violating norms, or that lead to mutually exclusive plans because of conflicting goals. In ex-
isting work, planning issues such as norms and plan-constraints have been considered in separate
contexts for solving conflicts in practical reasoning (Atkinson and Bench-Capon, 2007; Rahwan
and Amgoud, 2007) and for norm adoption (Oren et al., 2008a), but we bring together these issues
within a single coherent model. Furthermore, existing research on argumentation-based delib-
erative dialogue has mainly focussed on collaboration among agents in the selection of the best
action to perform (Kok et al., 2011; Black and Atkinson, 2009) or the best common plan to per-
form (Ferrando et al., 2011; Belesiotis et al., 2010b; Tang and Parsons, 2005; Medellin-Gasque
et al., 2011) towards the achievement of a joint goal when agents have different beliefs or pref-
erences. In contrast, here we explore what is possible when agents elaborate individual plans
for achieving individual objectives where only some activities must be done in collaboration. In
particular, in this research, we refer to these as complex problems of collaboration characterised
by a large number of conflicts between interdependent plans, caused by harmful interactions and norm violations, that may hamper the execution of these plans within the same environment. Furthermore, the complexity of these problems is also characterised by a higher number of actions and objectives in the plans compared to existing work, where simpler plans are considered
(Karunatillake et al., 2009; Black and Bentley, 2012; Ferrando and Onaindia, 2012).
In order to describe the characteristics of an adequate model of arguments, we consider the
example previously introduced whereby Jane, here representing a tour organiser, asserts that Billy
should take the train from Milan to Venice. Assume that Billy wants to visit Murano island from
the train station in Venice, and that Jack, the owner of a private boat, collaborates with Jane. The
two agents, through a deliberative dialogue, decide on a plan for taking Billy from Milan to Venice
by train and, then, from the train station to the island by boat. Using existing models of arguments
(Black and Atkinson, 2009; Medellin-Gasque et al., 2011) the two agents working in collaboration
can argue about what other alternatives exist, what values the alternatives promote or demote, or
the need to arrange a plan that reflects the correct view of the world (i.e. from the train station Billy must walk across the bridge to take the boat). Jack and Jane, however, share the
same planning domain, and each knows how to perform all the possible actions in the domain.
The boat owner’s only interest is in scheduling this trip, and he does not have other constraints that
conflict with taking Billy to the island. In contrast, suppose that Jack runs his own business for
boat trips around Venice, scheduling trips according to times and places that are more likely to
suit customers. In this situation, Jane needs assistance from Jack but the goals of the agents dif-
fer. Jack may have different reasons for rejecting Jane’s request to take Billy to Murano island.
For example, at the time when Billy arrives in Venice, Jack’s boat heads to another location more
interesting for tourists, St. Mark Square, thus the two actions of taking Billy to Murano and tak-
ing tourists to St. Mark Square cannot be performed at the same time because they use the same
resource for incompatible actions. Furthermore, assume that the access to Murano island is pro-
hibited without a special licence and Jack’s boat is not licensed for these trips. These two conflicts
cannot be modelled by existing argumentation schemes; indeed, Black and Atkinson (2009) model
dialogues in which only one action may be discussed at a time. In other frameworks where a plan
is considered (Medellin-Gasque et al., 2011; Ferrando et al., 2011; Pardo et al., 2011), conflicts of
concurrency and causality between actions are considered, but since agents are fully collaborative,
those conflicts are solved at the outset when constructing the common plan, and hence not used in
dialogue.
The above example highlights the need to develop a model of arguments that supports the
representation of conflicts between interdependent plans including concurrency, causality and le-
gality of actions. Agents should also be able to justify the need to adopt certain actions according
to their plan rules, goals and norms. Our model should permit agents to exploit these conflicts,
understand the reasons that have caused them and facilitate the establishment of agreements. In
order for such a model to be useful in drawing conclusions on what to perform, we will discuss
its introduction into an existing well-formulated dialogue system. This follows the layered struc-
ture for dialogue systems proposed in Prakken and Sartor (2002), in which a logical layer defines
the structure of arguments, a dialectical layer represents the relationships between arguments, a
procedural layer defines a protocol for communication to regulate the dialogue between agents,
and a heuristic layer defines heuristics to persuade the opponent to concede a claim. Furthermore,
dialogue between agents is always characterised by a purpose (Walton, 1989). The purpose in our
dialogue is to identify an agreement over a course of action to adopt. A plan for an agent is feasible
if it is possible for that agent to enact it; when more than one agent acts within the same environment, the feasibility of the plans refers to the ability of all agents to enact their individual
plans. However, there might be conflicts that may hamper their execution leading to threats to
the feasibility of the agents’ plans. During dialogue, agents argue about conflicts among interde-
pendent plans with the aim of solving those conflicts. Resolving conflicts leads to enhancing the
feasibility of the plans by increasing the likelihood that agents will be able to enact them. The
problem of determining how effective a model of arguments for deliberative dialogue is, however,
must be addressed with respect to achieving the purpose of the dialogue. This can only be deter-
mined by an empirical evaluation of the model. Existing research has focussed on demonstrating
that argumentation-based models are useful, mainly through the use of extended examples, and
proving properties such as soundness and completeness. However, there is a major gap between
theory and practice in the development of argumentation systems (Karunatillake et al., 2009). In
fact, no rigorous assessment of how the information conveyed using arguments affects the reso-
lution of conflicts has been performed. Furthermore, as discussed in Black and Bentley (2012),
often there is no consideration of whether the application of an argumentation-based approach is
adequate for the problem intended to be solved. In this research we want to go further by empiri-
cally evaluating the model of arguments presented. Such evaluation involves numerous challenges
because very few studies have included rigorous evaluation (Black and Bentley, 2012; Kok et al.,
2012a; Ferrando and Onaindia, 2012) and there is no established method to evaluate the results.
We argue that, in order for argumentation-based models of dialogue to be effectively used in prac-
tical applications we must rigorously evaluate their adoption in computational systems. In order to
perform the evaluation, we must design a system where agents can dynamically resolve conflicts
according to the information acquired during dialogue. This would enable an evaluation of how
the use of argumentation schemes contributes to the resolution of conflicts. Our research aims
to bridge the gap between theory and practice that exists within argumentation-based models of
deliberative dialogue.
Returning to our example, consider the situation in which Jane asserts that Billy should be
taken to the island by Jack’s boat and Jack refuses. Jack may say that it is not possible because it affects the feasibility of his plan: his boat cannot take tourists to different places at the same time; moreover, a norm states that the available boat is not licensed to travel to the
island. While both are plausible reasons for Jack to refuse Jane’s proposal, they are of a different nature and of different levels of importance. Agents should identify which reason is more important
and which conflict must be resolved with greater urgency. In domains such as disaster-response, decisions are time-stressed, and so agents must be able to rapidly recognise and focus
on the most important conflicts. Existing argumentation-based models of dialogue have proposed
methods for comparing arguments; e.g. actions chosen based on their utility in the plan (Rahwan
and Amgoud, 2007), costs and social implications in acquiring resources (Karunatillake et al.,
2009), or social values of the partners involved (Black and Atkinson, 2009; Medellin-Gasque et al.,
2011). Here we consider the selection of arguments to exchange during the dialogue according
to their importance; i.e. agents should perform a quantitative analysis of the costs and benefits of
actions, and penalties for norm violation. In existing work, however, since there is no evaluation of
the effectiveness of the criteria employed, there is also no assessment of whether the information
shared during the dialogue enables agents to effectively solve more important conflicts. There
is a need to understand, through a rigorous empirical evaluation, what kinds of dialogue strategies facilitate the resolution of conflicts of varying importance, and to explore under what conditions a
strategy is more effective.
1.3 Problem statement

Planning in collaboration in a team context is a complex activity. A number of argumentation-
based models have been proposed to address the problem, the rationale being that the revelation of
background information and constraints can aid in the discovery and resolution of plan conflicts
towards the achievement of more feasible plans. In highly interdependent plans, however, tasks
to achieve individual objectives may conflict. Agents must, therefore, coordinate their activities
while complying as far as possible with societal norms. Given the challenges highlighted above,
in this research we argue that:
By employing a model of arguments based on argumentation schemes, focussed on
identifying plan and normative conflicts, in an appropriate deliberative dialogue,
agents motivated by individual goals are able to establish successful agreements on
how to act together by effectively identifying more feasible interdependent plans. By
strategically selecting arguments about conflicts caused by plan and norm constraints
to be shared during such a dialogue, agents are able to identify and resolve more im-
portant conflicts.
1.4 Contributions

The contributions of the research presented in this thesis are as follows:
• Arguing about interdependent norm-driven plans (Chapter 4): an open question in argumentation-based deliberative dialogue is how agents may discuss interdependent norm-
driven plans with individual objectives. We propose a model of arguments based upon argu-
mentation schemes that allows agents to represent conflicts between plans caused by concur-
rent actions, causal plan-constraints, norms and goals. We believe that our model, employed
in an existing well-formed dialogue system (Kok et al., 2011), facilitates agreements about
interdependent plans by enriching the quality of the deliberative dialogue through the ex-
change of relevant arguments about plan commitments, norms and goals.
• Effective agreements and conflict resolution (Chapter 5): argumentation-based systems suffer from a major gap between theory and practice, whereby the properties and benefits of
argumentation are shown only through the discussion of extended examples where solu-
tions to conflicts are chosen among sets of predefined alternatives. To date, existing work
does not present a rigorous assessment of how the information shared with the argumen-
tation schemes affects the reasoning about conflicts caused by plan and norm constraints.
We present a system where agents construct alternatives that can solve existing conflicts
dynamically according to the information acquired during dialogue in order to assess the
tradeoff between information shared and the effective resolution of conflicts. We empiri-
cally evaluate this system and discuss the impact of the use of argumentation schemes in the
identification of feasible plans.
• Selection strategies for more effective agreements (Chapter 6): we explore strategic selection of arguments to improve the effectiveness of the information shared during dialogue.
In existing argumentation frameworks, the selection of arguments to be exchanged is made
using different criteria, including values and the utility of the plan. However, no evaluation has been conducted of how effective such criteria are at resolving conflicts, and in particular the most important conflicts. In this research, we define and evaluate selection heuristics for recommending choices among arguments that model the overall benefit
for the team. We discuss the impact of exploring information about new conflicts in reach-
ing better agreements. We exploit the relative importance of conflicts for aiding strategic
decisions in dialogue in order to understand what kind of dialogue strategies facilitate the
resolution of more important conflicts.
1.5 Thesis outline

The remainder of this thesis is organised as follows:
• In Chapter 2, we review approaches in multiagent planning and coordination in order to situate our research. We then focus on argumentation in multi-agent systems for supporting
reasoning and inter-agent communication. We discuss existing work on argumentation-
based models of practical reasoning for deliberative dialogue and their evaluation.
• In Chapter 3, we introduce the layered model of Prakken and Sartor (2002) extended to represent our framework for deliberative dialogue. Following this, we describe the layers
that function as background to our work: the topic language where we discuss situation
calculus as the underlying planning language; and the procedural layer where we discuss
the elements of the dialogue system including the protocol for communication and rules to
determine the outcome of the dialogue.
• In Chapter 4, we present our model of: the concrete layer, where conflicts between interdependent plans are defined; the logical layer, where arguments based on argumentation
schemes are presented; and the dialectical layer in which relationships between arguments
are defined.
• In Chapter 5, we describe our empirical study and the experimental system. We discuss the conditions under which agents achieve successful agreements and enhance the feasibility
of the interdependent plans. We show that the use of our model of argumentation schemes
in deliberative dialogue is an effective mechanism to establish agreements on how to act
together.
• In Chapter 6, we explore the use, at the heuristic layer, of strategic selection of arguments to improve the effectiveness of the information shared during dialogue. We focus on understanding what kinds of strategies facilitate a more effective resolution of conflicts and
the resolution of more important conflicts. We empirically demonstrate under what condi-
tions those strategies facilitate the exploration of new conflicts and exploitation of known
conflicts.
• In Chapter 7, we reflect on the characteristics of our work, its limitations and possible improvements. We then propose some applications where we expect our work to make a
positive contribution.
• In Chapter 8, we present our conclusions highlighting the main findings of this study.
1.6 Related publications

Parts of the research described in this thesis have been published in:
• Toniolo, A., Norman, T. J., and Sycara, K. (2012a). An empirical study of argumentation schemes for deliberative dialogue. In Proceedings of the Twentieth European Conference on Artificial Intelligence, volume 242, pages 756–761; also presented at the Ninth International Workshop on Argumentation in Multi-Agent Systems 2012.
• Toniolo, A., Norman, T. J., and Sycara, K. (2012b). On the benefits of argumentation schemes in deliberative dialogue (extended abstract). In Proceedings of the Eleventh International Conference on Autonomous Agents and Multiagent Systems, pages 1409–1410.
• Toniolo, A. (2012). Agent support for collaboration in complex deliberative dialogues (extended abstract). In Proceedings of the Doctoral Consortium of the Eleventh International
Conference on Autonomous Agents and Multiagent Systems.
• Toniolo, A., Norman, T. J., and Sycara, K. (2011a). Argumentation schemes for collaborative planning. In Kinny, D., Hsu, J. Y., Governatori, G., and Ghose, A. K., editors, Agents
in Principle, Agents in Practice, volume 7047 of Lecture Notes in Computer Science, pages
323–335. Springer Berlin Heidelberg.
• Toniolo, A., Norman, T. J., and Sycara, K. (2011b). Argumentation schemes for policy-driven planning. In Proceedings of the First International Workshop on the Theory and
Applications of Formal Argumentation.
Chapter 2
Related work
The aim of this research is to explore mechanisms to enable a heterogeneous team of agents to
cooperate within the same environment. We propose an argumentation-based model for delib-
erative dialogue based upon argumentation schemes for practical reasoning. In this chapter, we
discuss relevant approaches in the context of multiagent planning and argumentation that consti-
tute the background of our research. Section 2.1 aims to situate the problem addressed in our
research within existing work on planning and coordination. In particular, we discuss how agents
may plan and act in teams with common objectives or in different contexts where team members
have different objectives but their plans are interdependent. Since agents that act within a society
must comply with norms that promote ideal behaviour, we review related work for norm-driven
plans. Furthermore, we claim that employing methods for communication that support dialogue
regarding plans may facilitate the resolution of coordination and normative conflicts. Thus, we
discuss communication and dialogue in planning within multiagent systems. However, as intro-
duced in the previous chapter, simple dialogues may not be expressive enough to resolve complex
planning and coordination problems. Therefore, in Section 2.2 we give some background on ar-
gumentation theory, argumentation for practical reasoning and argumentation-based dialogue. In
Section 2.3 we discuss existing research on argumentation-based models of deliberative dialogue.
In contrast with the previous sections where a literature review is proposed, in Section 2.3 we crit-
ically compare existing work according to the type of problem addressed and the argumentation
framework proposed, discussing similarities and the existing research gaps to which our research contributes. In Section 2.4 we focus on the small number of existing empirical studies of the use of
argumentation-based systems within deliberative dialogue.
2.1 Problem background: Planning and coordination

This section describes how the problem discussed in our research has been studied within mul-
tiagent planning and coordination systems, what solutions have been proposed and how commu-
nication can ease issues arising when agents work in the same environment. Planning is a major
topic in Artificial Intelligence. This survey is not an extensive review of planning, but explores the
research in this area that directly relates to our problem.
2.1.1 Agents and planning

An agent is an entity that perceives its environment through sensors and acts on that environment
through actuators as shown in Figure 2.1 (Russell and Norvig, 2003). An agent’s behaviour
is described by a function that maps a sequence of perceptions into actions to perform, and an
[Figure 2.1: Agent model in Russell and Norvig (2003) — an agent perceives the Environment through Sensors and acts upon it through Actuators.]
internal program that implements this function. However, an agent may also present more human-
like characteristics such as knowledge, beliefs, intentions, obligations and emotions (Wooldridge
and Jennings, 1995).
A simple agent function is a mapping from the inputs and the outputs of an agent, but in more
complex domains an agent has to devise a plan of actions to achieve a goal. A classic planning
problem is defined by an initial state, a goal and a planning domain where rules for applying
actions to possible states of the world are specified. A plan is a solution to the planning problem: a sequence of actions that, starting from the initial state of the world, leads to a state in which the goal is achieved. A planning algorithm is used to search the space of possible states to reach
the goal state or to search the space of possible plans to identify a valid plan.
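The components of a classical planning problem, and a state-space search over it, can be sketched as follows. This is a minimal illustration under a toy domain of our own devising (it is not drawn from the cited texts): states are sets of ground facts, and actions add and delete facts in the STRIPS style.

```python
from collections import deque

# Minimal STRIPS-style state-space search (an illustrative sketch; the
# toy domain and all names are ours). A state is a frozenset of ground
# facts; an action is (name, preconditions, add list, delete list).
def plan(initial, goal, actions):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # goal facts all hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                    # preconditions hold
                nxt = (state - delete) | add    # apply effects
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                 # no plan exists

actions = [
    ("move(A,B)", frozenset({"at(A)"}), frozenset({"at(B)"}), frozenset({"at(A)"})),
    ("move(B,C)", frozenset({"at(B)"}), frozenset({"at(C)"}), frozenset({"at(B)"})),
]
print(plan({"at(A)"}, frozenset({"at(C)"}), actions))  # -> ['move(A,B)', 'move(B,C)']
```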
Planning languages have been specified to define planning problems. Here we present a brief
overview (Russell and Norvig, 2003; Nau et al., 2004). The most well-known language is the
STRIPS representation (Fikes and Nilsson, 1971), which is commonly used as a language to represent planning domains. A more recent language, PDDL (Planning Domain Definition Language),
extending STRIPS, is now used as a standard computational planning language (Ghallab et al.,
1998). These languages are based upon the definition of actions as action schemas. Each action
schema consists of a name, a list of preconditions that are necessary conditions of the world for the
action to be performed and a set of effects that indicates how the world will change if the action is
performed when the preconditions hold. For example, the action schema for moving a vehicle from one location to another in PDDL is:
(:action moveVehicle
  :parameters (?from ?road ?to ?vehicle)
  :precondition (and (route ?road ?from ?to) (vehicleAt ?vehicle ?from))
  :effect (and (not (vehicleAt ?vehicle ?from)) (vehicleAt ?vehicle ?to)))
Other languages have been introduced to express more complex planning domains. For example, ADL (Action Description Language) (Pednault, 1994) extended STRIPS to support more general features, in particular negative and disjunctive preconditions (“A vehicle can only be moved if
it is not carrying anything or if the road is empty”), and quantified expressions that permit the use
of a less verbose representation (“All the vehicles in location A should be moved to location B”).
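A disjunctive precondition of the kind quoted above could be expressed, in an ADL-style syntax, roughly as follows (a hypothetical fragment of ours; predicate names such as carryingAnything and roadEmpty are illustrative):

```
(:action moveVehicle
  :parameters (?from ?road ?to ?vehicle)
  :precondition (and (route ?road ?from ?to)
                     (vehicleAt ?vehicle ?from)
                     (or (not (carryingAnything ?vehicle))
                         (roadEmpty ?road)))
  :effect (and (not (vehicleAt ?vehicle ?from)) (vehicleAt ?vehicle ?to)))
```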
Planning languages such as ADL and PDDL are derived from specifications of classical planning problems; however, other languages based upon a logical definition of actions have been
proposed. Those are derived from deductive planning problems, whereby the plan is obtained by
proving a theorem in a logical framework. As discussed in Nau et al. (2004) the main difference
in the representation of the two approaches is in the definition of actions: in classical planning
actions are based upon action schemas and the computation of the transition between states of
the world is made through adding and deleting effects of the action to the current state of the
world; in deductive planning, actions are logical formulas and the states of the world are com-
puted by applying inference rules defined by the underpinning logic. The languages for deductive
planning, such as Situation Calculus (McCarthy and Hayes, 1987), have the advantage of being
more expressive than the classical ones (Nau et al., 2004), for example in terms of representing
partial observability and extended goals; i.e. goals that include requirements on states traversed
to achieve the final state. In our research, a language is required for defining planning domains,
individual plans and refinement of those complex plans with constraints related to actions, states
of the world, goals, and norms. Existing research on situation calculus provides extensions that
permit the representation and the elaboration of these complex problems within a flexible dynamic
approach (Pinto and Reiter, 1995; Reiter, 1996; Demolombe and Pozos-Parra, 2005). For this purpose, in this research we adopt situation calculus as a language to represent plans using an axiomatisation of the world. The most commonly used definition
of situation calculus is the one introduced by Reiter (1991) that provides simple axioms to deal
with the frame problem. The frame problem is that of representing effects of an action without
explicitly representing all the invariants of the domain. A detailed description of situation calculus
is presented in Section 3.2.1.
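Reiter's solution is based on successor state axioms, which specify exactly when each fluent holds after an action. For the vehicle example above, one such axiom might read as follows (our sketch, following the general schema in Reiter (1991)):

```latex
% Successor state axiom (illustrative) for the fluent vehicleAt:
% v is at l after doing a in s iff a moved v to l, or v was already
% at l and a did not move it away.
vehicleAt(v, l, do(a, s)) \equiv
  (\exists r, l')\; a = moveVehicle(l', r, l, v) \;\lor\;
  \big( vehicleAt(v, l, s) \land
        \neg (\exists r, l'')\; a = moveVehicle(l, r, l'', v) \big)
```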
These languages allow the representation of a large class of computational problems, how-
ever, for representing real world planning domains other constraints must be considered such as
access to resources and temporal constraints in terms of duration of actions, precedence between
actions and absolute time constraints. Allen (1984) discusses temporal planning problems, pre-
senting a taxonomy based on intervals of execution of actions and defining the order of durative
actions and possible interactions between them, such as overlapping. The major planning languages have
been extended to include temporal specifications. For example, PDDL2.1 (Fox and Long, 2003) allows the representation of both continuous and discrete actions, while situation calculus has been extended for continuous actions in Reiter (1996) and for discrete actions in Pinto and Reiter (1993).
When the planning domain is specified, an algorithm for finding solutions to the planning
problem must be identified. There exists a variety of algorithms for this purpose that perform a
search on the space of states of the world or on the space of plans (Nau et al., 2004). The former
approach is called state-space planning, and consists of searching a path from the initial state to
the goal state in a space where nodes are states and arcs are transitions between states. In plan-space planning, nodes are partially defined plans and arcs are refinement operators; the search consists of finding a node that represents a plan that achieves the goal. While
in the former solution plans are represented as a linear sequence of actions, in the latter a partially
ordered plan is identified where there is a fixed partial order between actions and a valid plan is
a sequence of actions that satisfies this order (Penberthy and Weld, 1992). The partial order is
defined by actions that establish preconditions for other actions and thus have to precede those in
the plan. A causal link is specified in this case, stating that a precondition for an action is provided
by an effect of another action. Constraints are then added on the basis of threatening actions, i.e. actions with effects that negate causal links between actions in the plan. A more recent approach, which follows a representation of the search space similar to that of state-space planning, is that of Hierarchical Task Networks (HTN) (Erol et al., 1995). The objective in HTN is to accomplish an overall task
and the representation of plans follows common abstraction approaches. The primitive actions
are placed at a lower level of the plan hierarchy, while complex actions composed of (partially
ordered) primitive actions are placed in higher levels. The plan is formed by refining downwards
from the goal until a sequence of primitive actions is found.
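The threat check at the heart of plan-space planning can be sketched as follows (an illustrative fragment; the representation and all names are ours, not from the cited works): a causal link recording that an action producer supplies precondition p to an action consumer is threatened by any action that deletes p and may be ordered between the two.

```python
# Illustrative causal-link threat detection (representation and names
# are ours). A causal link (producer, p, consumer) records that
# producer's effect p supplies a precondition of consumer; an action
# threatens the link if it may fall between them and deletes p.
def threatens(action, link, may_precede):
    producer, p, consumer = link
    return (p in action["deletes"]
            and may_precede(producer, action)
            and may_precede(action, consumer))

unload = {"name": "unload", "deletes": {"vehicleAt(truck, depot)"}}
link = ("drive", "vehicleAt(truck, depot)", "load")
# With no ordering constraints yet, any pair may be ordered either way.
print(threatens(unload, link, lambda a, b: True))  # -> True
```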
Up to this point, we have presented methods for an individual agent to achieve its goals; however,
when there are multiple agents in the environment, the definition of the planning domain and the
planning algorithm must take into consideration that multiple actors may execute the plan. Fur-
thermore, the resolution of the problem may be decentralised when a team of agents collaborate in
the creation of a plan. A multiagent planning problem is defined as an extended planning problem,
where agents attempt to prepare plans to achieve their goal(s) as well as to assist other agents
(Russell and Norvig, 2003). Planning languages have been extended to include the possibility that
a team of agents will execute the plan; this is the case, for example, for a recent multiagent version of PDDL (Kovacs, 2012) and for situation calculus for multiagent systems (Shapiro and Lespérance, 2001). In the next section we focus on problems arising when multiple agents coexist in the same environment.
2.1.2 Coordination, collaboration and communication

Working in a team increases and enriches the capabilities of a single agent. However, when a group
of agents act in the same environment, collaborating or competing, rules on how to act together
and how to align individual behaviour to the customs of the society must be established. If these
constraints are not respected, actions of an individual may interfere with the goals of others.
In this survey, we first discuss coordination as the activity of managing interdependencies
when agents are provided with plans, but the plan of an individual may be inconsistent with that of
others and agents must coordinate their activities to prevent harmful interactions. We then describe
collaborative planning approaches in which plans are decided in collaboration with other agents.
Finally, we look at the problem where agents are heterogeneous and the goal of the interaction
between agents is to collaborate to create interdependent plans with individual objectives and
manage the dependencies in order to avoid inconsistent behaviour.
Coordination

Coordination is defined as
“The process by which an agent reasons about its local actions and the actions of
others to try and ensure that the community acts in a coherent manner.” (Jennings,
1996, page 187)
Some of the reasons that lead agents to coordinate their activities are (Jennings, 1996): when
agents’ individual actions are interdependent, and an action of an individual may perturb the ex-
ecution of other team members’ actions; or when information, resources or capabilities are dis-
tributed among the team and the access to these resources must be regulated. In this research, we
refer specifically to coordination as the runtime activity of reorganising actions in existing agents’
plans to be enacted in the same environment, in order to prevent harmful interactions or deadlocks
in using resources.
Coordination is ensured by controlling the access to scarce resources, where mechanisms
for accessing and sharing resources must be established, and by preventing negative interactions
(Nwana et al., 1996). Positive and negative interactions of agents are defined as (Dimopoulos and
Moraitis, 2006; Cox and Durfee, 2005; Thangarajah et al., 2003): a positive interaction is when an
agent contributes towards the achievement of another agent’s goal by performing certain actions;
e.g. agent x opens the door for agent y because y's arms are full; a negative interaction is a conflict caused by a lack of coordination on causal or concurrent actions. Threats to causal links cause
harmful behaviour, where preconditions established by some actions are negated by
others’ actions (Dimopoulos and Moraitis, 2006; Weld, 1994; McAllester and Rosenblitt, 1991).
Concurrent actions may also cause inconsistencies when, for example, an action has effects that
negate the effects of a concurrent action in another agent's plan (Blum and Furst, 1997).
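Such a negative interaction between concurrent actions can be detected by a simple check for complementary effects. The sketch below is illustrative (the representation of effects as signed fluent literals is our own):

```python
# Illustrative check for conflicting concurrent effects (representation
# is ours): effects are (fluent, value) pairs, and two concurrent
# actions conflict if one asserts a fluent that the other negates.
def effects_conflict(action_x, action_y):
    return any((fluent, not value) in action_y["effects"]
               for fluent, value in action_x["effects"])

move = {"name": "moveVehicle", "effects": {("vehicleAt(v, depot)", True)}}
close = {"name": "closeDepot", "effects": {("vehicleAt(v, depot)", False)}}
print(effects_conflict(move, close))  # -> True
```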
In organisational structures, coordination is enforced when designing the team activities.
Agents are organised in a hierarchical structure with specific identification of responsibilities and
roles of each individual, where each task is performed to achieve an overall goal. Agents may not
interact with each other but they have to report their work to higher levels of the hierarchy. When
interactions between agents are inevitable, dedicated agents are in charge of solving those coor-
dination conflicts by communicating meta-level information (e.g. when a task has started, ended,
etc.). In complex organisations, rules or normative constraints are also defined to describe the
desired behaviour of the agents in the society (Dignum and Dignum, 2001). Norms are often rep-
resented by obligations, prohibitions and permissions. In this case norms will be established and
enforced in the design of the system, where the designer, on detecting behaviour that violates those norms, will revise the plan or initiate new plans if there are no solutions (Governatori et al., 2011).
In other approaches, coordination aims to prevent harmful interactions by resolving interdependencies at runtime. In this case, agents may share all their information through a blackboard
system, which consists of a shared data structure where agents upload knowledge and tasks, or
a “wise” agent may be identified to resolve inconsistencies and direct the activities of the team.
These and other techniques of coordination are characterised in Jennings (1993) as employing a commitments-and-conventions paradigm, where commitments are promises to undertake a specific course of action and conventions are rules for adopting or dropping commitments.
Collaboration and teamwork

In the previous section, we assumed that plans were assigned to the agents and that, in order to enact these plans, agents must ensure coordination. However, in more dynamic domains, agents in a team
collaborate in the creation of a plan. Coordination of activities must still be considered. In fact, the
purpose of building a plan is to identify all the future actions and possible interactions in order to
avoid harmful behaviour (Nwana et al., 1996). However, when collaborating, agents are provided
with goals and methods for achieving those goals but they must identify and agree at runtime on
a plan to perform in collaboration. Collaborative agents act as a team with mutual intentions and
awareness of each other and are able to modify their behaviour in order to conform to the decisions
made by the team (Jennings et al., 1998).
In this context, the theory of BDI agents has been developed, based on the analysis of the
rational behaviour of an agent in terms of three mental states: Beliefs, Desires, and Intentions
(Bratman, 1987). Two well-known collaborative planning frameworks developed for BDI agents
are Joint Intentions (Cohen and Levesque, 1990) and Shared Plans (Grosz and Kraus, 1996). In
Joint Intentions, agents have a shared goal based upon shared beliefs and they commit to the
intention of performing an action in respect to the whole team and to each other; here the individual
intentions are joined together to achieve the overall intention of the team. Shared Plans, instead, is
based on recipes of how to perform actions and agents progressively share recipes and commit to
actions; here the intention of the team is more than joint individual intentions and it can only be
achieved by complementing each other's knowledge. In BDI planning frameworks, agents interact by communicating, where communication entails many dialogue features for sharing beliefs and
requesting tasks (Tambe, 1997).
In addition to complementing each other’s knowledge on how to solve complex problems,
agents may have different views of the world and they must be able to reason and agree on the current state of the world. Discrepancies in agents' beliefs cause plan inconsistencies because agents have
different perspectives on what is possible to perform. This problem is often addressed through the
exchange of information for detecting and resolving conflicting beliefs (Chu-Carroll and Carberry,
2000). Belief revision may change the need for adopting certain goals. Agents must be able to
discard goals, and generate new goals according to their perception of the environment, their de-
sires to achieve a particular state of the world (Shapiro and Brewka, 2007) or in order to comply
with societal obligations such as in the BOID architecture (Broersen et al., 2002). Although the
planning activity is intended for agents to fully describe their behaviour before execution time, the
surrounding environment may change and impede the execution of certain parts of the plan. In
these non-deterministic planning environments, agents must sense the environment at execution
time in order to decide what to do (Tang et al., 2009; Scherl and Levesque, 2003; Golden and
Weld, 1996). Other languages have considered events that happen in the environment (Kowalski
and Sergot, 1989; Pinto, 1998; Morley et al., 1994). In such problems, communication may ease
the sharing of sensed information and support the identification of appropriate actions that lead to
the achievement of the objectives (Tang et al., 2009).
As previously introduced, when acting within a society, agents have to align their beliefs, and
decide in collaboration what to do but also they must adopt behaviour that is socially acceptable
according to norms that regulate this society (Kollingbaum and Norman, 2004; Vasconcelos et al.,
2009; Oren et al., 2011; Vasconcelos et al., 2007). Although the compliance with norms can be
enforced when designing the system, in more dynamic contexts, norms may change because of the
interaction of agents in the environment (Vasconcelos et al., 2007). Norms may have conditions
of activation and expiration that are time-dependent or caused by events within the environment
(Boman, 1999; Kollingbaum and Norman, 2004). Furthermore, when the society is self-regulated
agents may work together to design a set of norms that every member should adhere to (Kolling-
baum and Norman, 2002; López et al., 2006). Heterogeneous agents may adhere to diverse sets
of norms and may autonomously decide to adopt a norm existing in the society (Kollingbaum and
Norman, 2004). However, in order to maintain a consistent individual set of norms, when adopt-
ing a new norm an agent must resolve logical inconsistencies with the norms already adopted
(Vasconcelos et al., 2009; López et al., 2006). Logical inconsistencies are intended as actions or
states of the world that are simultaneously forbidden and obliged. Norm conflicts may be solved
by unifying norms by their scope of influence, detecting conflicts and solving them (Vasconcelos
et al., 2009) or deciding to consciously violate these norms (Castelfranchi et al., 2000; Oren et al.,
2008a). When agents have agreed upon a set of norms to comply with, their plans must be selected
in order to ensure that they are consistent with the norms; otherwise agents will incur violations that may be sanctioned (Kollingbaum and Norman, 2004). Different agents may, however, have
different norms and when they interact their plans may not be norm-compliant. Agents may also
decide whether it is worth complying with a norm depending on the cost of its violation and plan
according to the decision made (Oren et al., 2011).
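The kind of logical inconsistency described above, where an action is simultaneously obliged and prohibited, can be sketched with a minimal check (the representation of norms as (modality, target) pairs is ours, not that of the cited works):

```python
# Minimal norm-conflict check (representation is ours, not from
# Vasconcelos et al.): a set of norms is logically inconsistent if
# some action is both obliged and prohibited.
def norm_conflicts(norms):
    obliged = {target for modality, target in norms if modality == "obligation"}
    prohibited = {target for modality, target in norms if modality == "prohibition"}
    return obliged & prohibited

norms = {("obligation", "moveVehicle(truck, depot, base)"),
         ("prohibition", "moveVehicle(truck, depot, base)"),
         ("permission", "load(truck, cargo)")}
print(norm_conflicts(norms))  # -> {'moveVehicle(truck, depot, base)'}
```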
Collaboration with interdependent plans

In the frameworks previously described, agents collaborate in the creation of a plan to be performed as a team in the same environment. Agents have joint goals that must be achieved by
agreeing in collaboration on a common plan of actions. These teams work in full collaboration.
As argued in Castelfranchi (1995), there are, however, different forms of collaboration that corre-
spond to different degrees of interrelation between individual planning problems whereby agents
have different roles and they may not share a common goal. In this case, agents are driven by
the need to collaborate only for certain actions in their plans; for example when an agent must
establish the precondition for an action of another agent. In addition to building individual plans
and agreeing on what is best to perform to achieve individual objectives, agents must manage
interdependencies between individual plans, resolving conflicts in order to coordinate their plans. As argued in Nwana et al. (1996), collaboration does not necessarily imply that inconsistent behaviour is prevented; collaboration may itself lead to harmful interactions when agents' plans are interdependent, because plans are not fully shared and an agent may have modified the environment without informing others.
In this type of scenario agents interact with other agents and reason about