
An extensible argument-based ontology matching negotiation approach


JID:SCICO AID:1695 /FLA [m3G; v 1.128; Prn:17/02/2014; 9:27] P.1 (1-23)

Science of Computer Programming ••• (••••) •••–•••

Contents lists available at ScienceDirect

Science of Computer Programming

www.elsevier.com/locate/scico

An extensible argument-based ontology matching negotiation approach

Paulo Maio ∗, Nuno Silva

GECAD – Knowledge Engineering and Decision Support Research Group, School of Engineering of Polytechnic of Porto, Rua Dr. Bernardino de Almeida 431, 4200-072 Porto, Portugal

Highlights

• A novel argument-based ontology matching negotiation approach is proposed.
• An explicit, formal, shared and extensible argumentation model is adopted.
• Experiments demonstrate the usefulness and pertinence of the approach.
• The approach is easy to adapt and evolve to support different scenarios' requirements.
• A software development framework supports the adoption of the proposed approach.


Article history:
Received 1 February 2013
Received in revised form 22 October 2013
Accepted 24 January 2014
Available online xxxx

Keywords:
Ontology matching
Argumentation
Negotiation
Systems interoperability

Computational systems operating in open, dynamic and decentralized environments are required to share data with previously unknown computational systems. Due to this ill specification and emergent operation, the systems are required to share the data's respective schemas and semantics so that the systems can correctly manipulate, understand and reason upon the shared data. The schemas and semantics are typically provided by ontologies using specific semantics provided by the ontology language. Because computational systems adopt different ontologies to describe their domain of discourse, a consistent and compatible communication relies on the ability to reconcile (at run-time) the vocabulary used in their ontologies. Since each computational system might have its own perspective about what are the best correspondences between the adopted ontologies, conflicts can arise. To address such conflicts, computational systems may engage in any kind of negotiation process that is able to lead them to a common and acceptable agreement.
This paper proposes an argumentation-based approach where the computational entities describe their own arguments according to a commonly agreed argumentation meta-model. In order to support autonomy and conceptual differences, the community argumentation model can be individually extended while maintaining computational effectiveness. Based on the formal specification, a software development framework is proposed.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

More and more computational systems (e.g. agents, web services) operating in open, dynamic and decentralized environments (e.g. semantic web, e-commerce, peer-to-peer, agent-based systems) require information sharing with previously unknown systems. Due to this ill specification and emergent operation, the computational systems are now required to

* Corresponding author. Tel.: +351 22 834 05 00.
E-mail addresses: [email protected] (P. Maio), [email protected] (N. Silva).

0167-6423/$ – see front matter © 2014 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.scico.2014.01.011


share the data's respective schemas and semantics, so that the systems can correctly manipulate, understand and reason upon the shared data. The schemas and semantics are typically provided by ontologies using specific semantics provided by the ontology language. Nevertheless, computational systems maintain their autonomy and conceptual specificities, leading to different ontologies and thus preventing the direct information sharing. Accordingly, a successful systems interaction relies on the ability to reconcile their ontologies at run-time. In the literature, the ontology reconciliation problem is usually referred to as Ontology Matching [1]. A reconciliation process consists of establishing a set of correspondences (referred to as an alignment) between the systems' ontologies, which are further exploited to interpret or translate exchanged messages and their content. Therefore, systems need to autonomously decide on each and all correspondences between the ontologies they adopt in a conversation/interaction. For that purpose, a common approach found in the literature consists in providing an ontology matching service, such that the interacting computational systems agree (implicitly or explicitly) on using that service and, therefore, an alignment is requested as needed. However, ontology matching is a burdensome and error-prone process due to different factors. Firstly, because of the different applied semantics of the ontology languages and modeling approaches. Secondly, because of the conceptual interpretation of the linguistic dimension of the ontology, which typically grounds the ontology to the domain of knowledge but unfortunately is a source of multiple interpretations and therefore of matching ambiguities. Consequently, the ontology matching process can lead to different and contradictory results (i.e. alignments) depending on the adopted matching approaches.
Thus, considering that distinct computational systems may have different needs and objectives and, therefore, different preferences concerning the matching process, computational systems may exploit the matching services they find more convenient instead of relying on a common matching service. For example, a computational system may prefer alignments having a high recall in disfavor of precision, while the other one may prefer precision instead of recall. In scenarios like the one described above, i.e. where each interacting computational system may adopt its most suitable matching service, it is necessary to provide a mechanism enabling those systems to avoid and/or resolve possible alignment conflicts. In that sense, state-of-the-art literature refers to two negotiation-based approaches: relaxation-based [2] and argument-based approaches [3,4].

This paper proposes a novel argument-based approach where arguments are described according to a state-of-the-art argumentation meta-model that captures general argumentation semantics. Moreover, the adopted meta-model is first instantiated by the negotiating community into a community argumentation model capturing the commonly agreed arguments (types or schemes) regarding the application domain. Further, in order to support autonomy and conceptual differences between individual systems, the community argumentation model can be individually extended, yet maintaining computational effectiveness.

Based on the formal specification (Sections 3 and 4), a software development framework is proposed and its architecture and design are discussed (Section 5). Examples and experiments adopting the proposals are finally presented (Section 6). Yet, in order to introduce the reader to important concepts and terminology, the next section revises important background knowledge.

2. Background knowledge

First, this section concisely surveys the ontology matching domain. Further, it defines the ontology matching negotiation problem and briefly describes current state-of-the-art approaches.

2.1. Ontology matching

Ontology matching is seen as the process of discovering, (semi-)automatically, the correspondences between semantically related entities of two different but overlapping ontologies. Thus, as stated in [1], the matching process is formally defined as a function f : (O1, O2, p, res, A) → A′ which, from a pair of ontologies to match, O1 and O2, a set of parameters p, a set of oracles and resources res, and an input alignment A, returns an alignment A′ between the matched ontologies. Ontologies O1 and O2 are often denominated the source and target ontologies, respectively. An alignment is a set of correspondences expressed according to:

• Two entity languages QL1 and QL2 associated with the ontology languages L1 and L2 of the matching ontologies (respectively), defining the matchable entities (e.g. classes, object properties, data properties, individuals);

• A set of relations R that is used to express the relation held between the entities (e.g. equivalence, subsumption, disjointness, concatenation, split);

• A confidence structure φ that is used to assign a degree of confidence to a correspondence. It has a greatest element ⊤ and a smallest element ⊥. The most common structure is the real numbers in the interval [0,1], where 0 represents the lowest confidence and 1 represents the highest confidence.

Hence, a correspondence (or a match) is a 4-tuple c = (e, e′, r, n) where e ∈ QL1(O1) and e′ ∈ QL2(O2) are the entities between which a relation r ∈ R is asserted, and n ∈ φ is the degree of confidence in the correspondence.
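The 4-tuple above can be sketched in code. The following is a minimal, illustrative rendering (class and field names are our own; the [0,1] confidence structure is the common one mentioned in the text):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    entity1: str       # e  ∈ QL1(O1), e.g. a class URI in the source ontology
    entity2: str       # e' ∈ QL2(O2), a class URI in the target ontology
    relation: str      # r ∈ R, e.g. "=" (equivalence) or "<" (subsumption)
    confidence: float  # n ∈ φ = [0, 1]: 0 is the lowest, 1 the highest confidence

    def __post_init__(self):
        # Enforce the chosen confidence structure.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must lie in [0, 1]")

# An alignment is simply a set of correspondences:
alignment = {
    Correspondence("o1#Person", "o2#Human", "=", 0.92),
    Correspondence("o1#Student", "o2#Person", "<", 0.75),
}
```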

Over recent years, research initiatives in ontology matching have developed many systems (e.g. [5]) that rely on the combination of several basic algorithms yielding different and complementary competencies, to achieve better results. A basic algorithm generates correspondences based on a single matching criterion [6]. These algorithms can be multiply classified


Fig. 1. An overview of the ontology matching negotiation process.

as proposed in [1,7] (e.g. terminological, structural, semantic). Moreover, systems make use of a variety of functions such as:

• Aggregation functions, whose purpose is to aggregate two or more sets of correspondences into a single one (e.g. min, max, linear average);

• Alignment extraction functions, whose purpose is to select from a set of correspondences those that will be part of the resulting alignment. The selection method may rely on the simplest methods, such as the ones based on threshold values (summarized in [1]), or on more complex methods based on, for example, local and global optimizations (e.g. [8,9]).
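The two function families above can be sketched as follows. This is a hedged illustration, not any particular system's implementation: the max aggregation strategy and the threshold value are assumptions, and correspondences are reduced to (entity, entity) keys for brevity:

```python
def aggregate_max(conf_sets):
    """Aggregate several matchers' confidence maps by keeping, for each
    candidate correspondence, the maximum confidence assigned to it
    (the 'max' aggregation strategy mentioned in the text)."""
    result = {}
    for conf in conf_sets:
        for corr, n in conf.items():
            result[corr] = max(n, result.get(corr, 0.0))
    return result

def extract_by_threshold(confidences, threshold=0.7):
    """Simplest alignment extraction: keep every correspondence whose
    aggregated confidence reaches the threshold."""
    return {corr for corr, n in confidences.items() if n >= threshold}

# Two hypothetical matchers' outputs:
terminological = {("Person", "Human"): 0.6, ("Paper", "Article"): 0.9}
structural     = {("Person", "Human"): 0.8}

agg = aggregate_max([terminological, structural])
alignment = extract_by_threshold(agg, threshold=0.7)
```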

The selection of the most suitable algorithms/system is still an open issue, as they should not be chosen exclusively with respect to the given data but also adapted to the problem that is to be solved [1]. However, this question has already been dealt with in [10–12]. Despite all the existing (conceptual and practical) differences between matching systems and algorithms, we will refer to both as matchers, as all of them have a set of (candidate) correspondences as output.

2.2. Ontology matching negotiation

Generically, ontology matching negotiation (OMN) approaches take into consideration that negotiation occurs between two honest and co-operative computational systems whose purpose is to agree on an alignment between their ontologies that satisfies and ensures confidence in the business interaction process. Moreover, it is assumed that each computational system is capable of devising an alignment by itself or, alternatively, in collaboration with other systems not participating in the negotiation. In this respect, it is important to bear in mind that each business system willing to interact may delegate the negotiation task to a third-party entity acting on its behalf. The object of negotiation is the alignment content to establish between the systems' ontologies. Therefore, systems negotiate about the inclusion or exclusion of each correspondence suggested by one of them into the agreed alignment. The value that each system associates with correspondences is highly subjective and depends on several factors such as (i) the pertinence of the correspondence with respect to the business interoperability and (ii) dependencies between other correspondences (e.g. some correspondences may imply or depend on other correspondences in a valid alignment).

Fig. 1 graphically depicts an overview of the ontology matching negotiation process.

In the literature, considering completely automatic negotiation processes, i.e. where there is no user intervention, one can find two distinct categories of approaches applied to this problem: (i) the ones based on relaxation mechanisms (e.g. [2]) and (ii) the argument-based approaches (e.g. [3,4]).

Concerning the former category, these approaches rely on a set of utility functions enabling a system to (re)classify each correspondence as accepted, negotiable or rejected. Their major drawback is the difficulty of defining proper utility functions.

Concerning the latter category, these approaches instantiate the Value-based Argumentation Framework (VAF) [13], which captures existing arguments and attack relations between arguments. Each argument promotes a value that is further used to determine whether an attack succeeds or not, based on a preference-ordered list of values. Because arguments are generated from correspondences provided by matchers, possible argument values have been restricted to the five categories of matchers proposed in [1]:

• Terminological (T): are those that compare the names, labels and comments that are related to the ontological entities;
• Internal Structural (IS): are those that exploit the internal characteristics of entities, such as the domain and range of their properties, the cardinality of attributes, or even transitivity and/or symmetry assertions;


Fig. 2. The three TLAF/EAF modeling layers as captured by the respective OWL ontology.

• External Structural (ES): are those that exploit the (external) relations that an entity has with the other entities of the ontology, such as super-entity, sub-entity or sibling;

• Semantic (S): are those that utilize theoretical models to determine whether there is a correspondence or not between two entities;

• Extensional (E): are those that compare the set of instances of entities being evaluated.

Despite being simple and quite effective when adopted by systems, this approach has some important limitations, namely regarding the characteristics of autonomy and rationality [14] that are typical of systems dealing with the matching problem (e.g. agents), and the incapacity to take into consideration the (positive or negative) effect of accepting or rejecting a correspondence on the acceptability of other correspondences under discussion.

To overcome these and other limitations (e.g. lack of quantitative or opinion factors) of current argument-based approaches, it is suitable to follow a different line of research. In this sense, a novel approach is proposed where the negotiating systems adopt the generic and domain-independent Argument-based Negotiation Process (ANP) presented in [15]. The next section briefly describes the Extensible Argumentation Framework (EAF) [15–17] on which the ANP relies. The application of both (ANP and EAF) to the ontology matching negotiation context is novel and goes beyond the state of the art.

3. Extensible Argumentation Framework

The Extensible Argumentation Framework (EAF) [15–17] is based on and extends the Three-Layer Argumentation Framework (TLAF) [18], which comprehends the three modeling layers described in Section 3.1. Further, since arguments may be more or less persuasive and their persuasiveness may vary according to their audience, the arguments' acceptability is addressed in Section 3.2.

3.1. Conceptual layering modeling

Unlike abstract argumentation frameworks such as AF [19], BAF [20] and VAF [13], TLAF adopts a general and intuitive argument structure and a conceptual layer for the specification of the semantics of the argumentation data applied in a specific application domain. Therefore, despite being less abstract than AF, BAF and VAF, TLAF remains domain independent. While the Meta-Model Layer and the Instance Layer of the adopted argumentation framework roughly correspond to the (meta-)model layer and the instance layer of abstract argumentation frameworks, the Model Layer does not have any correspondence in the surveyed abstract argumentation frameworks (illustrated in Fig. 2).

The meta-model layer defines the core argumentation concepts and the relations holding between them. It adopts and extends the minimal definition presented by Walton in [21], where "an argument is a set of statements (propositions), made up of three parts, a conclusion, a set of premises, and an inference from premises to the conclusion". For that, the meta-model layer defines the notions of Argument, Statement and Reasoning Mechanism, and a set of relations between these concepts. An argument applies a reasoning mechanism (such as rules, methods, or processes) to conclude a conclusion-statement from a set of premise-statements. An IntentionalArgument is the type of argument whose content corresponds to an intention


Fig. 3. Partial graphical representation of a Model Layer for the ontology matching domain.

[14,22]. Domain data and its meaning are captured by the notion of Statement. This mandatorily includes the domain intentions, but also the desires and beliefs. The distinction between arguments and statements allows the application of the same domain data (i.e. statement) in and by different means to arguments. Also, the same statement can be concluded by different arguments, and serve as the premise of several arguments.
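The meta-model layer's notions can be sketched as plain data structures. This is an illustrative reading of the Walton-style structure (premises + inference + conclusion); the class names follow the text, but every other detail is an assumption:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Statement:
    content: str  # domain data: an intention, belief or desire

@dataclass
class ReasoningMechanism:
    name: str     # e.g. a rule, method or process

@dataclass
class Argument:
    premises: List[Statement]
    mechanism: ReasoningMechanism
    conclusion: Statement
    intentional: bool = False  # IntentionalArgument: its content is an intention

# The same statement can be concluded by one argument and serve as a
# premise of another:
s1 = Statement("labels 'Person' and 'Human' are synonyms")
s2 = Statement("include correspondence (Person, Human, =) in the alignment")
a1 = Argument([], ReasoningMechanism("lexical lookup"), s1)
a2 = Argument([s1], ReasoningMechanism("heuristic"), s2, intentional=True)
```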

With respect to ontology matching negotiation, an intentional argument represents the will to include/exclude a correspondence in/from the agreed alignment, while information used to support/attack that will is represented by a non-intentional argument.

The Model Layer captures the semantics of the argumentation data (e.g. argument types/schemes) applied in a specific application domain (e.g. ontology matching, e-commerce, legal reasoning and decision making) and the relations existing between them. In that sense, the model layer is important for the purpose of enabling knowledge sharing and reuse between computational systems. In this context, a model is a specification used for making model commitments. Practically, a model commitment is an agreement to use a vocabulary in a way that is consistent (but not complete) with respect to the theory specified by a model [23,24]. Systems then commit to models, and models are designed so that the knowledge can be shared among these systems. Accordingly, the content of this layer directly depends on:

• The domain of application to be captured, and
• The perception one (e.g. a community of systems or an individual system) has about that domain.

Due to this, we adopt the vocabulary of (i) argument (or statement)-instance as an instance of an (ii) argument (or statement)-type defined at the Model Layer. Similarly, we adopt the vocabulary of (i) relation between types, and (ii) relationship between instances.

At the model layer, an argument-type (or argument scheme) is characterized by the statement-type it concludes, the applied class of reasoning mechanism (e.g. Deductive, Inductive, Heuristic) and the set of affectation relations (R) it has. The R relation is a conceptual abstraction of the attack (Ratt) and support (Rsup) relationships. The purpose of R is to define at the conceptual level that argument-instances of an argument-type may affect (either positively or negatively) instances of another argument-type. For example, according to the model layer of Fig. 2, (C, D) ∈ R means that instances of argument-type C may attack or may support instances of argument-type D, depending on the instances' content. On the other hand, if (X, Y) ∉ R, instances of argument-type X cannot (in any circumstance) attack/support instances of argument-type Y. Yet, the R relation is also used to determine the types of statements that are admissible as premises of an argument-instance. So, an argument-instance of type X can only have as premises statements of type S iff S is concluded by an argument-type Y and Y affects X (i.e. (Y, X) ∈ R). For example, considering again the model layer of Fig. 2, instances of argument-type D can only have as premises statements of type B, because D is affected by argument-type C only.
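The two uses of R described above (who may affect whom, and which premise types are admissible) can be encoded directly. The sketch below uses the running example's type names C, D and B; the helper names and the extra pair (D, E) are hypothetical:

```python
# The affectation relation R as a set of (affecting, affected) type pairs.
R = {("C", "D"), ("D", "E")}

# Which statement-type each argument-type concludes (B is from the text;
# the others are placeholders).
concluded_statement_type = {"C": "B", "D": "S_D", "E": "S_E"}

def may_affect(x_type, y_type):
    """Instances of x_type may attack/support instances of y_type
    only when (x_type, y_type) ∈ R."""
    return (x_type, y_type) in R

def admissible_premise_types(x_type):
    """A statement-type S is admissible as a premise of instances of
    x_type iff S is concluded by some type Y with (Y, x_type) ∈ R."""
    return {concluded_statement_type[y] for (y, x) in R if x == x_type}
```

For instance, `admissible_premise_types("D")` yields `{"B"}`, mirroring the example in the text: D is affected by C only, and C concludes statements of type B.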

Fig. 3 depicts an example of an argumentation model for the ontology matching domain, without mentioning the reasoning mechanisms.

The MatchArg is an intentional argument representing the intention to include a correspondence into the alignment, which is affected by three arguments used by the state-of-the-art approaches: Terminological, External Structural and Semantic (cf. Section 2.2). Each correspondence generated by an algorithm classified in one of those categories is seen as a direct reason for or against the intention of including that correspondence into the alignment.

The Instance Layer corresponds to the instantiation of a particular model layer for a given scenario. Here, an argument-instance applies a concrete reasoning mechanism to conclude a conclusion-statement-instance from a set of premise-statement-instances. The conflictWith relation is established between two statement-instances only. A statement-instance b1 is said to be in conflict with another statement-instance b2 when b1 states something that implies or suggests that b2 is not true or does not hold. The conflictWith relation is symmetric (in Fig. 4, b2 conflicts with b1 too). It is worth

JID:SCICO AID:1695 /FLA [m3G; v 1.128; Prn:17/02/2014; 9:27] P.6 (1-23)

6 P. Maio, N. Silva / Science of Computer Programming ••• (••••) •••–•••

Fig. 4. Instantiation of a Model Layer for the ontology matching domain.

Fig. 5. The inferred support and attack relationships.

noticing that all instances existing in the instance layer must have an existing type in the model layer, and must conform to that type's characterization.

Considering our application scenario, the instance layer captures the correspondences and the reasons for and against those correspondences that are being exchanged between computational systems. Fig. 4 depicts an instantiation example of the argumentation model presented above (cf. Fig. 3).

The support (Rsup) and attack (Ratt) relationships between argument-instances are automatically inferred by means of four rules:

• An argument-instance x supports another argument-instance y when the argument-type of x affects the argument-type of y and either:
  ◦ The conclusion of x is a premise of y (R1), or
  ◦ Both argument-instances have the same conclusion (R2);

• An argument-instance x attacks another argument-instance y when the argument-type of x affects the argument-type of y and either:
  ◦ The conclusion of x is in conflict with any premise of y (R3), or
  ◦ The conclusion of x is in conflict with the conclusion of y (R4).

Considering the instantiation depicted in Fig. 4, the inferred support and attack relationships between argument-instances are the ones illustrated in Fig. 5.
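The four inference rules can be sketched as a single function. Arguments are modelled minimally as dictionaries, with the affectation relation R and the conflictWith relation given as explicit sets; all type and statement names below are illustrative:

```python
def relationships(x, y, affects, conflicts):
    """Return the relationships inferred from argument-instance x to y.
    affects:   set of (type_x, type_y) pairs (the R relation)
    conflicts: set of (statement, statement) pairs (conflictWith)."""
    rels = set()
    if (x["type"], y["type"]) not in affects:
        return rels                      # no affectation: no relationship
    if x["conclusion"] in y["premises"]:
        rels.add("supports")             # R1: conclusion of x is a premise of y
    if x["conclusion"] == y["conclusion"]:
        rels.add("supports")             # R2: same conclusion
    if any((x["conclusion"], p) in conflicts for p in y["premises"]):
        rels.add("attacks")              # R3: conclusion conflicts with a premise
    if (x["conclusion"], y["conclusion"]) in conflicts:
        rels.add("attacks")              # R4: conclusions in conflict
    return rels

# Toy instance pool: two terminological arguments about a MatchArg.
affects = {("TerminologicalArg", "MatchArg")}
conflicts = {("s_neq", "s_eq")}
t1 = {"type": "TerminologicalArg", "premises": [], "conclusion": "s_eq"}
t2 = {"type": "TerminologicalArg", "premises": [], "conclusion": "s_neq"}
m  = {"type": "MatchArg", "premises": ["s_eq"], "conclusion": "include"}
# t1 supports m via R1; t2 attacks m via R3.
```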


Fig. 6. Partial graphical representation of an extended Model Layer for the ontology matching domain.

It is worth noticing that computational systems adopting this argumentation framework may use arguments for two purposes: (i) to represent and communicate intentions (i.e. intentional arguments) and (ii) to provide considerations (i.e. beliefs, desires) for and against those intentions (i.e. non-intentional arguments). Thus, an intentional argument may be affected by several non-intentional arguments. Additionally, to capture dependency between intentions, intentional arguments may also be affected (directly or indirectly) by other intentional arguments. A defeasible argument is affected by other (sub-)arguments (i.e. the ones concluding its premises or the ones undermining those premises), while an indefeasible argument can only be affected by its negation, since it cannot have premises. Given that, in a Model Layer, intentional arguments should always be defeasible. On the contrary, non-intentional arguments can be either defeasible or indefeasible.

EAF extends TLAF by providing the constructs and respective semantics for supporting modularization and extensibility features. In that sense, any EAF model is a TLAF model, but not the inverse. In the EAF Model Layer, arguments, statements and reasoning mechanisms can be structured through the H_A, H_S and H_M relations, respectively. These are acyclic transitive relations established between similar entity types (e.g. arguments), in the sense that in some specific context entities of type e1 are understood as entities of type e2. While these relations are vaguely similar to the specialization relation (i.e. subclass/superclass between entities), they do not have the same semantics and are constrained to a 1–1 relationship (cf. [16]). An EAF model may reuse and extend the argumentation conceptualizations of several existing EAF models. Inclusion of an EAF into another EAF is governed by a set of modularization constraints ensuring that no information of the included EAF is lost. Fig. 6 illustrates the usage of the EAF extensibility feature regarding the ontology matching domain. The model depicted in this figure (called EAF_S1) extends the model previously depicted in Fig. 3 (called EAF_C), such that the new arguments and statements are colored white while the arguments and statements of the extended model are colored gray.

According to this example, the EAF semantics imply (for example) that any instance of SubEntitiesArg is understood as, and is translatable to, an instance of ExtStructuralArg. In the argument exchange context, this feature is relevant considering that each computational system internally adopting a distinct EAF model (e.g. EAF_S1) extended from a common/shared EAF model (e.g. EAF_C) may translate arguments represented in its internal model to the shared model, therefore enabling the understanding of those arguments by the other computational systems (cf. Section 4.2.6 for details). Therefore, the EAF features allow the systems to conceptualize their private argumentation model while maintaining compatibility and semantic understanding with the remaining community.
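The translation from a private extended model to the shared one can be sketched as walking the H_A hierarchy upwards until a community-known type is reached. The mapping and the set of shared types below are taken from the running example; the function itself is a hypothetical illustration, not the EAF's actual translation procedure:

```python
# H_A as a child -> parent mapping (1–1, acyclic, as the text requires).
H_A = {
    "SubEntitiesArg": "ExtStructuralArg",
}

# Argument-types of the shared community model EAF_C (cf. Fig. 3).
SHARED_TYPES = {"MatchArg", "TerminologicalArg", "ExtStructuralArg", "SemanticArg"}

def to_shared_type(arg_type):
    """Walk H_A upwards until a type known to EAF_C is reached,
    so a privately-typed argument can be understood by the community."""
    t = arg_type
    while t not in SHARED_TYPES:
        if t not in H_A:
            raise ValueError(f"{arg_type} has no counterpart in the shared model")
        t = H_A[t]
    return t
```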

In light of EAF_S1, one may say that argument-instances of type ExtStructuralArg are affected by argument-instances of type SubEntitiesArg, since their conclusions are seen as premises that lead to the conclusion about the relation between the external structure of the two entities. On the other hand, an argument-instance a1 of type SubEntitiesArg is affected by the intention of accepting/rejecting a correspondence (MatchArg) when the entities related in the MatchArg instance are sub-entities of the entities related by a1. Thus, MatchArg affects SubEntitiesArg. Generalizing, intentional arguments are being used to support/attack other arguments.

3.2. Argument acceptability

In any computational system using argumentation, one of the most important processes is to determine the acceptability of argument-instances, i.e. to state which argument-instances hold (are undefeated) and which argument-instances do not hold (are defeated). Most argumentation systems (e.g. the Prakken version of ASPIC [25], MbA [4], FDO [3]) use the abstraction provided by the argumentation frameworks (e.g. AF [19], BAF [20], VAF [13]) to make logical inferences, i.e. to select the conclusions of the associated sets of arguments. For that, an abstract argumentation semantics such as the ones described in [26] is applied.


An application adopting the EAF is still able to exploit such techniques, because an EAF instance-pool can be easily represented in a more abstract formalism such as BAF [17] and AF [20]. Yet, because EAF assumes that bipolarity is important for the application domain (e.g. ontology matching), argumentation systems may also opt to apply an argument evaluation process that exploits the bipolarity, such as the ones proposed in [28–31]. However, none of these processes is able to (i) deal with the cyclic relationships that may exist between argument-instances and (ii) take advantage of the EAF Model Layer. To overcome such limitations, an EAF argument evaluation process was devised, comprehending two complementary steps.

The first step determines the strength of each argument-instance based on (i) the type and (ii) the strength value of the argument-instances supporting and attacking the argument-instance being evaluated. For that, and grounded on the idea that different argument-types may demand different forms of evaluation, it is required that each argument-type has an associated argument evaluation function (f), which is responsible for the strength evaluation of all argument-instances of that type. For example, an argument-type applying a deductive reasoning method may be evaluated by a function (f1) that returns a value stating that an argument-instance holds iff the argument-instance being evaluated is not attacked by any other argument-instance; otherwise the function returns a value stating that the argument-instance does not hold. On the other hand, an argument-type applying a voting reasoning method may be evaluated by a function (f2) that considers the difference between the number of argument-instances attacking and supporting the argument-instance being evaluated to state whether the argument-instance holds or not. Yet, it is important to bear in mind that each audience (e.g. computational system) has distinct preferences and, therefore, may evaluate the argument strength through a distinct set of evaluation functions.
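The two example functions f1 and f2 can be made concrete on a minimal value scale. The scale V = {-1, 0, +1} (V_min = -1, V_m = 0, V_max = +1) and all other details below are assumptions used purely for illustration:

```python
def f1_deductive(attacker_strengths):
    """f1 sketch: an instance of a deductive argument-type holds (+1)
    iff no attacking argument-instance is currently undefeated;
    otherwise it does not hold (-1)."""
    return 1 if not any(s > 0 for s in attacker_strengths) else -1

def f2_voting(supporter_strengths, attacker_strengths):
    """f2 sketch: an instance of a voting argument-type holds when its
    undefeated supporters outnumber its undefeated attackers, does not
    hold in the opposite case, and is undecided (V_m = 0) on a tie."""
    diff = (sum(s > 0 for s in supporter_strengths)
            - sum(s > 0 for s in attacker_strengths))
    return 1 if diff > 0 else (-1 if diff < 0 else 0)
```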

Additionally, due to possible cyclic relationships between argument-instances it is also required:

• An algorithm (alg) that iterates over the argument evaluation functions to (re)evaluate the argument-instances' strength until a defined criterion is reached; and

• A matrix of the argument-instances' strength values (mapV), where each column represents an argument-instance and each row represents the strength of every argument-instance in a given iteration of the algorithm being used. Therefore, mapV_i denotes the values of all argument-instances in the i-th iteration and mapV_i^a denotes the strength of an argument-instance a in the i-th iteration. In particular, mapV_0^a denotes the initial strength of the argument-instance a and mapV^a denotes the strength of an argument a in the last iteration (row) of the matrix.

Distinct argument evaluation functions may exploit differently the relationships between argument-instances and the strength/information of those argument-instances. Despite those differences, it is necessary that the values returned by all functions follow a common semantics understood by alg. Thus, for the sake of simplicity, consider that an argument evaluation function is defined as f : (AI, mapV_i) → V, where AI is the set of all existing argument-instances and V is an ordered set {Vmin, . . . , Vm, . . . , Vmax} with at least three possible values, such that:

• Vmin represents the minimal strength value,

• Vmax represents the maximal strength value, and

• Vm represents a value whose distance to Vmin and Vmax is the same.

Hence, the strength value of an argument-instance a ∈ AI in iteration i evaluated by f, such that mapV_i^a = f(a, mapV_{i−1}), has the following semantics:

• mapV_i^a > Vm means that the argument a holds and is therefore undefeated. In addition, if mapV_i^a > mapV_i^b > Vm, the confidence in considering argument a undefeated is greater than the confidence in considering argument b undefeated;

• mapV_i^a < Vm means that the argument a does not hold and is therefore defeated. In addition, if Vm > mapV_i^a > mapV_i^b, the confidence in considering argument b defeated is greater than the confidence in considering argument a defeated;

• mapV_i^a = Vm means that the argument a has an undefined status, i.e. it might be considered either as defeated or as undefeated. This means that the positive force given by the support relationships and the negative force given by the attack relationships are equivalent.
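A minimal sketch of the two example evaluation functions (f1 and f2) under these semantics, assuming an illustrative numeric scale with Vmin = −1, Vm = 0, Vmax = 1 and a simple representation of an argument-instance by its lists of attackers and supporters (these names and structures are assumptions, not the paper's API):

```python
# Hypothetical value scale (Vmin, Vm, Vmax) and argument representation.
V_MIN, V_M, V_MAX = -1.0, 0.0, 1.0

def f1(attackers, supporters):
    """Deductive style: an argument-instance holds iff it is not attacked."""
    return V_MAX if not attackers else V_MIN

def f2(attackers, supporters):
    """Voting style: compare the number of supporters and attackers."""
    diff = len(supporters) - len(attackers)
    return V_MAX if diff > 0 else V_MIN if diff < 0 else V_M
```

Note how f2 maps a tie between attack and support to Vm, i.e. the undefined status described above.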

The result of the execution of the algorithm alg is therefore the mapV matrix, populated with the arguments' strength values computed by the evaluation functions. This matrix is used as input information to the next step.
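The iteration performed by alg can be sketched as a fixpoint computation that appends one row to mapV per iteration until the values stabilise or an iteration limit is reached (the data structures, function names and the convergence criterion below are illustrative assumptions):

```python
def run_alg(instances, eval_fns, initial, max_iters=100, eps=1e-6):
    """Sketch of alg.

    instances: dict arg_id -> (arg_type, attackers, supporters)
    eval_fns:  dict arg_type -> f(arg_id, prev_row) -> strength value
    initial:   dict arg_id -> initial strength (the row mapV_0)
    Returns mapV as a list of rows (one dict per iteration)."""
    mapv = [dict(initial)]
    for _ in range(max_iters):
        prev = mapv[-1]
        # Each argument-instance is re-evaluated by the function of its type.
        row = {a: eval_fns[instances[a][0]](a, prev) for a in instances}
        mapv.append(row)
        # Stop when the strengths no longer change between iterations.
        if all(abs(row[a] - prev[a]) < eps for a in instances):
            break
    return mapv
```

A cyclic attack between instances would simply keep producing new rows until the iteration limit, which is why a stopping criterion beyond convergence is needed.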

The second step consists of selecting a preferred extension, which is a sub-set of argument-instances representing a consistent position within the EAF instance-pool that, according to an audience, is defensible against all attacks and cannot be

1 The EAF's set of argument-instances and the derived argument-instance relationships (Rsup and Ratt) correspond to the three elements constituting a BAF instance.

2 The EAF's set of argument-instances and the derived argument-instance relationships (Rsup and Ratt) are represented as an AF by first representing the EAF as a BAF and then representing the resulting BAF as an AF. The process to represent a BAF as an AF is described in [27].


Fig. 7. Overview of the proposed argument-based negotiation approach.

further extended without introducing a conflict. For that, the selection process makes use of two empty sets of argument-instances (T and T′) and runs as follows. For each argument-instance (say a) whose type is an intentional argument:

1. If the defeat status of a is undefeated then a is added to T and all argument-instances supporting it are added to T′;

2. If the defeat status of a is defeated then a is not added to T but all argument-instances attacking it are added to T′;

3. If the defeat status of a is undefined (mapV^a = Vm), multiple preferred extensions exist, and one of the above steps is alternatively executed.

At the end, the preferred extension is obtained by the union of T and T′ (prefext = T ∪ T′), such that the set T corresponds to the intentional preferred extension (iprefext) while the set T′ corresponds to the belief preferred extension (bprefext). Thus, an EAF preferred extension is composed of the undefeated intentional arguments and all the non-intentional arguments that support (directly or indirectly) the undefeated intentional arguments. Again, notice that the undefined status of argument-instances gives rise to multiple preferred extensions. Thus, one considers that (i) an argument is sceptically admissible if it belongs to every preferred extension and (ii) an argument is credulously admissible if it belongs to at least one preferred extension.
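The selection step just described can be sketched as follows (a hedged illustration; the representation of relations and strengths is assumed, and supporters are collected transitively so that indirect support is included, as the definition above requires):

```python
def preferred_extension(intentional, supporters, attackers, strength, v_m=0.0):
    """Return (prefext, T, T_prime) following the three selection rules.

    intentional: ids of intentional argument-instances
    supporters/attackers: dict arg_id -> set of directly related arg_ids
    strength: final row of mapV (dict arg_id -> value)"""
    T, T_prime = set(), set()

    def collect_support(arg, acc):
        # Gather direct and indirect supporters of arg.
        for s in supporters.get(arg, set()):
            if s not in acc:
                acc.add(s)
                collect_support(s, acc)

    for a in intentional:
        if strength[a] > v_m:            # undefeated: keep it and its supporters
            T.add(a)
            collect_support(a, T_prime)
        elif strength[a] < v_m:          # defeated: keep its attackers instead
            T_prime |= attackers.get(a, set())
        # strength[a] == v_m: undefined status -> multiple preferred
        # extensions exist; either rule could be applied here.

    return T | T_prime, T, T_prime
```

Here T plays the role of iprefext and T_prime of bprefext.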

Given a preferred extension (prefext), the intentions and beliefs of a computational system correspond to the statement-instances concluded by the argument-instances of the preferred extension.

4. The proposed argument-based approach

This section presents the proposed argument-based approach, which is inspired by and relies on the general Argument-based Negotiation Process (ANP) described in [15]. First, an overview of the overall approach is provided, followed by a detailed description of the argument-based negotiation process and its phases.

4.1. Overview

The proposed argument-based approach assumes the negotiation occurs in the scope of a given community. Fig. 7 graphically depicts an overview of the proposed argument-based negotiation approach for the ontology matching domain. It exploits (and somehow mimics) the way humans argue with each other, in the sense that humans share a large common knowledge/perception about a given domain (e.g. ontology matching) but each one has its own perception and rationality over that domain. For that, it comprehends the notion of a (private/public) argumentation model (AM), which is an artifact that captures (partially or totally) the perception and rationality that one has about a specific domain regarding the argumentation process. Accordingly, an argumentation model conceptually defines the vocabulary used to form arguments, the arguments' structure and even the way arguments affect (i.e. attack and support) each other.


Fig. 8. The argument-based negotiation process.

The community in which the negotiation occurs defines a set of rules by which all interactions are governed. In that sense, the community is also responsible for defining a public argumentation model, which is a shared argumentation model capturing the minimal common understanding of argumentation over the domain problem being addressed (e.g. ontology matching) by the members of that community. Therefore, all members of that community are able to understand the defined public argumentation model and reason on it.

Further, each system (member of the community) must be able to extend the public argumentation model so it better fits its own needs and knowledge. As a result, the members freely specify their own private argumentation models. Contrary to a public argumentation model, a private argumentation model captures the individual understanding of argumentation that a system has over the domain problem being addressed. It is worth noticing that the EAF model layer, together with the extensibility and modularization features, satisfies the above definitions of public/private argumentation model. Therefore, from now on the ANP description adopts the EAF.

Because systems adopt their own private argumentation models, each system has the responsibility of searching, identifying and selecting sources of information (e.g. matching algorithms) that can provide the most relevant and significant information needed to instantiate its private model. After the private model instantiation, each system has a set of arguments that need to be evaluated in order to extract the system's preferred extension. In this context, a preferred extension defines the correspondences that a given system wants to include in the alignment and a set of reasons supporting those correspondences. Therefore, by exchanging the arguments of their preferred extensions, systems might be able to achieve an agreement, i.e. a consensus about which correspondences belong (or not) to the alignment to be established between their ontologies.

4.2. The negotiation process

The computational system's internal phases and its external interactions are illustrated in Fig. 8 as defined by the adopted general Argument-based Negotiation Process (ANP) [15], which is an iterative and incremental process. A description of each phase concerning the ontology matching negotiation domain is provided in the next sub-sections.

4.2.1. Setup

In the Setup phase a set of interactions between the systems participating in the negotiation occurs to define the context and the parameters of the negotiation. In particular, it is during this phase that:

• Each system informs the opponent system of the subject ontology (i.e. the system's ontology to align);

• The systems identify and accept the public argumentation model (AMC) provided by the community as the minimal common understanding between them. As a consequence, the private argumentation model of each system is the same as AMC or extends it such that AMC ⊑ AMS;

• A priori alignment properties (e.g. the alignment level and cardinality) can be established between the systems.


Complementary to the negotiation parameters, each participant creates an instance-pool of its own argumentation model (IP(AMS)) that will capture the argumentation data of the ongoing negotiation.

In contrast to other phases, this phase occurs only once.

4.2.2. Data Acquisition phase

During the Data Acquisition phase the computational system collects data/information that constitutes the grounds to generate arguments. The set of data/information collected by a negotiating participant is referred to as DS, such that d ∈ DS is a pair (G, c) where c is a correspondence and G is a univocal identification of the matcher from which c was collected.

To collect this information, participants (i) exploit internal matching algorithms and/or (ii) interact with other systems that are not directly participating in the negotiation process. This might be the case of specialized systems providing matching services and ontology matching repositories (OMRs). Also, as a result of the upcoming phases, correspondences temporarily agreed upon (but not settled as definitive) may be used to feed the data-collecting mechanisms. This is especially relevant for systems wishing to apply matching algorithms (e.g. semantic algorithms) in which the received correspondences play the role of anchors or inductive facts.

4.2.3. Argumentation Model Instantiation

In the Argumentation Model Instantiation phase, the participant makes use of one or more data transformation processes over the collected data to generate a set of arguments structured according to its argumentation model. In this context, it is important to bear in mind that EAF does not specify any structure for statements or reasoning mechanisms. Consequently, the responsibility to specify such entities is left to the application level.

Regarding the ontology matching application, the structure of statements and reasoning mechanisms, as well as the argument instantiation process, is defined as proposed in [17] and briefly described next.

A statement-instance is a 3-tuple s = (G, c, pos) where c is a correspondence, G is a univocal matcher identification and pos ∈ {+,−} states the position of G about c, i.e. states whether G is for (+) or against (−) c. On the other hand, an instance of a reasoning method is a tuple rm = (Γ, desc) where Γ is a univocal identification of the algorithm used by the matcher and desc is a textual description of Γ. For the sake of simplicity, and in order to be able to distinguish between different matchers using the same base algorithm Γ but with different configuration parameters, G is the univocal identification of the instance of algorithm Γ.

The position of a matcher G about a correspondence c = (e, e′, r, n) is determined based on the degree of confidence (n). In this sense, it is considered that G is:

• In favor (+) of c if its confidence value on c is equal to or greater than a given threshold value (n ≥ tr+);

• Against (−) c if its confidence value on c is less than another threshold value (n < tr−);

• Neither in favor of nor against c if tr− ≤ n < tr+, in which case c may be ignored.
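The three cases above can be sketched as a small decision function (the threshold names and default values are illustrative assumptions):

```python
def position(n, tr_minus=0.3, tr_plus=0.7):
    """Return '+', '-' or None for a matcher confidence value n."""
    if n >= tr_plus:
        return '+'   # in favor of the correspondence
    if n < tr_minus:
        return '-'   # against the correspondence
    return None      # neither in favor nor against: c may be ignored
```

Confidence values falling between the two thresholds yield no position, so the corresponding correspondence produces no statement-instance.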

Collected data is transformed into argument-instances through an interpretation function (ψ) that maps correspondences to the system's private argumentation model based on their content and provenance. In this sense, an interpretation function is defined as ψ : G × c → S × M × pos, where G is a univocal identification of the generator of correspondence c, S and M are a statement type and a reasoning mechanism of AMS, respectively, and pos is the value resulting from the interpretation of the matcher's position.

Details about how the interpretation function is further exploited to generate argument-instances are provided in [17]. Some examples of interpretation functions are provided in Section 6.1.

4.2.4. Argument Evaluation

During the Argument Evaluation phase, previously generated argument-instances are evaluated by the negotiation participant in order to extract a preferred extension. In this context, a preferred extension includes two kinds of argument:

• Intentional arguments, which define the intentions of the agent with respect to the agreement, i.e. they define the correspondences that a participant wants to include in/exclude from the alignment;

• Non-intentional arguments, which represent a set of reasons supporting the intentions, i.e. supporting the inclusion/exclusion of correspondences in/from the alignment.

If the argument evaluation process extracts more than one preferred extension then it is necessary to select one. The selection criterion has a special relevance during the negotiation process because it directly defines the system's intentions and the reasons behind those intentions. Given this, instead of a simple criterion, a more elaborate selection criterion may be taken into consideration. For example, instead of the "selection of the preferred extension that is maximal with respect to set inclusion", one may consider "the preferred extension that minimizes the changes with respect to the previous one".

The argument evaluation process to be adopted in this phase was previously presented in Section 3.2.

4.2.5. Agreement Attempt

The Agreement Attempt phase consists of two steps.


In the first step, the participants in the negotiation exchange the intentional arguments of their preferred extensions to perceive:

• Their convergences (AgreedArgs), i.e. the correspondences proposed/accepted by both participants. The set AgreedArgs represents a candidate alignment (or agreement);

• Their divergences (DisagreedArgs), i.e. the correspondences proposed/accepted by a single participant. The set DisagreedArgs represents the existing conflicts between the participants.

In the second step, according to the content of AgreedArgs and DisagreedArgs, the participants must decide whether to:

• Settle the candidate alignment as definitive and, therefore, proceed to the Settlement phase;

• Continue the negotiation, and therefore proceed to the Persuasion phase in order to try to resolve their conflicts;

• Conclude the negotiation without an agreement.
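The first step of this phase reduces to set operations over the exchanged intentional arguments; a minimal sketch, assuming correspondences are represented by hashable identifiers:

```python
def compare_intentions(mine, theirs):
    """Split two sets of proposed correspondences into AgreedArgs/DisagreedArgs."""
    agreed_args = mine & theirs      # proposed/accepted by both participants
    disagreed_args = mine ^ theirs   # proposed/accepted by a single participant
    return agreed_args, disagreed_args
```

An empty disagreed set would let the participants settle the candidate alignment immediately; otherwise they proceed to Persuasion.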

4.2.6. Persuasion

In order to persuade its opponent to accept or give up the disagreed correspondences, each system exchanges the set of non-intentional arguments existing in its preferred extension that support its position and, therefore, attack the other system's divergent positions.

At the end of this phase each system has collected a new set of information (EDS), corresponding to the received arguments presented by the other negotiating systems.

Furthermore, it is important to perceive the consequences of the systems making use of private arguments (the ones existing only in the system's private argumentation model). Therefore, for each argument exchanged between two participants, one of four possible scenarios occurs:

• The type of the argument-instance exists in the community's argumentation model and:
  ◦ The receiver system does not re-interpret the received argument-instance according to its private argumentation model (P1);
  ◦ The receiver system re-interprets the received argument-instance according to its private argumentation model (P2);

• The type of the argument-instance does not exist in the community's argumentation model and:
  ◦ The sender system makes use of the H_A, H_S and H_M relations to send the argument-instance as the most specific community argument type (P3);
  ◦ The sender system is not able to send the argument-instance according to the community's argumentation model (P4).

To exemplify each of these scenarios, consider a negotiation between two systems (S1 and S2). Further, consider that (i) the EAF model previously depicted in Fig. 3 (say EAFC) is the community argumentation model, (ii) system S1 uses as its private argumentation model the EAF model layer previously depicted in Fig. 6 (say EAF_S1), which extends EAFC, and (iii) the private argumentation model of system S2 is the one defined by the community, such that EAF_S2 ≡ EAFC.

The scenario P1 corresponds to the simplest scenario, where argument-instances are straightforwardly exchanged and similarly understood by both systems. For example, if an argument-instance of type TerminologicalArg is exchanged between S1 and S2, they will understand it similarly because neither of them is able to reclassify the argument-instance to another type.

In the scenario P2, argument-instances are also straightforwardly exchanged, but the receiver system interprets the argument-instances differently than the sender system. This implies that the receiver system is able to re-classify the argument-instances to another type. For example, S2 sends an argument-instance of type ExtStructuralArg to S1, which has the ability to re-classify it to SubEntitiesArg based (i) on the content of the argument-instance, (ii) on its knowledge regarding the argument instantiation process and (iii) on the H relations existing in its private argumentation model (EAF_S1).

With respect to the scenario P3, the sender realizes that the receiver system is (probably) not able to understand the argument because it is not (fully or partially) represented according to the common argumentation model. For the purpose of exchanging arguments, the sender system internally reclassifies those argument-instances as the most specific common argument type through the existing H_A, H_S and H_M relations. This is the case of S1 with respect to argument-instances of type SubEntitiesArg, since this type does not belong to EAFC, which makes S2 unable to understand such argument-instances. In the case of S1, the most specific common argument type of SubEntitiesArg is ExtStructuralArg. Therefore, S2 will receive instances of SubEntitiesArg as instances of ExtStructuralArg.

In the scenario P4, the sender is not able to reclassify the argument-instances to the community's model. In such cases, two mutually exclusive possibilities arise:

• Those argument-instances are not exchanged;

• Those argument-instances are exchanged in a general way (e.g. classified as Argument only), expecting that the receiver is able to understand them based on their content (similarly to what was described in P2).


Fig. 9. Software Development Framework package overview.

4.2.7. Argumentation Model Refinement

The Argumentation Model Refinement phase concerns the refinement of the community's argumentation model (AMC) according to the exchanged arguments and the private argumentation models (AMS). Hence, it requires the systems' ability to learn from interactions with other systems and from other systems' knowledge.

Due to the envisaged difficulty of the related tasks, this phase is seen as optional and, therefore, may be skipped. This task is out of the scope of this paper.

4.2.8. Instance Pool Update

In the Instance Pool Update phase, the participant analyses, processes and possibly reclassifies the arguments received during the Persuasion phase in light of its private argumentation model. As a result, the system adds new arguments and/or updates existing arguments. Therefore, the previous preferred extension becomes invalid and is discarded. The added/updated arguments are taken into consideration by the participant in the next round of proposals. The negotiation process proceeds (again) to the Data Acquisition phase.

At this point, an iteration of the argumentation process is concluded. The process has as many iterations as are needed to reach an agreed alignment or, instead, until no more (new) arguments are generated by the participants. Yet, it might be the case that a maximum number of iterations was previously defined in the Setup phase. In the two latter cases, the negotiation may end without an agreement, and therefore unsuccessfully.
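The overall iteration and its three termination conditions (agreement, no new arguments, iteration limit) can be sketched schematically; the step function below is a placeholder standing in for one full round of the ANP phases:

```python
def negotiate(step, max_iters=None):
    """Sketch of the ANP iteration loop.

    step() runs one round of phases and returns (agreed, new_args), where
    agreed tells whether an agreement was reached and new_args whether any
    new arguments were generated in the round."""
    i = 0
    while True:
        i += 1
        agreed, new_args = step()
        if agreed:
            return 'agreement', i
        # Terminate unsuccessfully when no new arguments appear or the
        # (optional) iteration limit defined in the Setup phase is hit.
        if not new_args or (max_iters is not None and i >= max_iters):
            return 'no-agreement', i
```

This mirrors the description above: the loop always returns to Data Acquisition unless one of the stopping conditions fires.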

4.2.9. Settlement

The goal of the Settlement phase is to transform the candidate agreement into a definitive agreement. In this respect, this phase is seen as an initiator of a set of tasks that depend on the business interaction process that had previously led the computational systems to the ontology matching negotiation process. Each negotiating participant makes use of the agreed alignment to develop the business interaction process.

5. Software Development Framework

This section describes the proposed Software Development Framework (SDF) that captures the previously described argument-based ontology matching negotiation process. The SDF allows the easy and guided development of the negotiation process in the scope of computational systems. The provided SDF is composed of six main packages (Fig. 9):

• ANP, which captures the Argumentation Negotiation Process;

• EAF, which captures the concepts of the Extensible Argumentation Framework;

• ANP4OM, which captures the adoption of ANP for ontology matching;

• EAF4OM, which captures the adoption of EAF for ontology matching;

• Matching, which captures the ontology matching domain-specific concepts applied in the proposed approach;

• Ontology, which captures the core concepts of the negotiating ontologies.


Fig. 10. Details of the proposed EAF4OM classes/interfaces through the class diagram.

The EAF package adopts the EAF original layering structure. The conceptual meta-model layer entities are captured by the classes in the Model Layer package (Argument Model, Argument Type, Statement Type and Reasoner Type). Regarding the model layer entities, two design approaches were considered: (i) to adopt classes that subclass the meta-model entities and (ii) to adopt instances for capturing the diverse types (Argument Type, Statement Type and Reasoner Type). The design decision fell on the use of instances because the class approach would require manual programming of the classes or run-time reflection-based development of classes. Instead, using instances is very simple and straightforward. Further, the proposed API helps to abstract from this. The Instance Layer package captures the original EAF instance layer. The instances from the instance layer are related to the instances in the model layer through the "typeof" relation.

The class diagram depicted in Fig. 10 refines the view of the package diagram of Fig. 9 (all entities are in fact interfaces, but for the sake of simplicity of the diagram the 〈〈interface〉〉 stereotype was dropped). Notice that the classes and methods marked with an asterisk (∗) are the ones introduced in EAF4OM. Also, notice that OO design best practices led to the adoption of the GoF Strategy pattern in order to represent the several evaluation processes (as described in previous sections), namely the statement conflict evaluation process, the preferred extension evaluation process, the evaluation of arguments and of the Rsup and Ratt relationships, the interpretation of correspondences and the decision to either proceed with or stop the negotiation. Accordingly, several interfaces with similar names are proposed.
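The Strategy-based design can be illustrated with a minimal sketch; the interface and class names below are illustrative stand-ins, not the framework's actual API:

```python
from abc import ABC, abstractmethod

class ArgumentEvaluationStrategy(ABC):
    """Strategy interface: one implementation per evaluation process."""
    @abstractmethod
    def evaluate(self, argument, prev_row):
        ...

class VotingStrategy(ArgumentEvaluationStrategy):
    """A concrete strategy in the style of f2: supporters minus attackers."""
    def evaluate(self, argument, prev_row):
        supporters, attackers = argument
        return len(supporters) - len(attackers)

class Evaluator:
    """The context class: holds a pluggable strategy, as in the GoF pattern."""
    def __init__(self, strategy):
        self.strategy = strategy

    def evaluate(self, argument, prev_row=None):
        return self.strategy.evaluate(argument, prev_row)
```

Swapping the strategy object changes the evaluation behaviour without touching the context, which is exactly the evolution point the framework exposes.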

The Process class in the ANP package captures the core functioning of the ANP process, namely the phase creation and flow control. The process's phases are captured by nine interfaces and several strategies. The class diagram in Fig. 11 complements the previous diagrams, depicting the creation relations between the ANP entities and the data structures and strategies defined in the scope of EAF. Notice that the phases' instantiation (i.e. creation, not represented in Fig. 11) is performed by the "config" method of the Process class through a specific factory (not represented).

Several evolution points are then available for the customization of the framework while maintaining the core principles of the approach. While the most common evolution points are the Argument Type, Statement Type, Reasoner Type and the strategies, every phase and data entity can potentially be specialized for the domain at hand.

An implementation of the described framework was developed and adopted during the experiments for evaluating the combination and application of the EAF and ANP in the context of ontology matching.

6. Experiments

In order to evaluate the effectiveness of the proposed approach, an empirical approach was adopted. The experiments aim to:

• Compare the proposed approach with the state-of-the-art FDO approach [3], which is an improvement of MbA [4];

• Evaluate the systems' ability to capture the dependency between intentional arguments (i.e. correspondences) in the outcome of the negotiation process;

• Evaluate the relevance of the H relations in the outcome of the negotiation process.


Fig. 11. Details of the proposed ANP entities and their creation relation with EAF.

Table 1
The test set of ontologies and their characteristics.

Ontology     Named classes   Object properties   Data properties   Expressivity

Cmt          36              49                  10                ALCIN(D)
Conference   60              46                  18                ALCHIF(D)
ConfOf       38              13                  23                SIN(D)
Edas         104             30                  20                ALCOIN(D)
Ekaw         74              33                  0                 SHIN
Iasted       140             38                  3                 ALCIN(D)
Sigkdd       49              17                  11                ALEI(D)

For this, the experiments are analyzed in a two-fold manner:

• Measuring the resolved conflicts and their correctness;

• Measuring the accuracy of the agreed alignment achieved by the systems through the proposed argumentation process when compared to the systems' initial state, i.e. before the argumentation process.

Before that, however, the set-up of the experiments is described in the next section.

6.1. Experimental set-up

Seven ontologies representing different theories and origins for the same real-world domain (conference organization) and, therefore, reflecting real-world heterogeneity were taken from the OAEI 2011 Conference Track [5] repository (Table 1). Even though other ontologies are available in this repository, they were not used because no reference alignment is available for them.

Since the ordering of the ontologies in each possible pair is irrelevant, a total of 21 ontology pairs was identified. However, for the sake of brevity and simplicity, the experiment results are presented considering the negotiation of all individual alignments as just one large alignment. Accordingly, the reference alignment contains 305 correspondences, which corresponds to the sum of the number of correspondences of all reference alignments.

Three distinct systems (further referred to as systems A, B and C) have been conceived, adopting different data acquisition, argumentation models, argument generation interpretation functions and evaluation functions depending on the experimentation scenario depicted in Table 2. All the argumentation scenarios were executed for the system pairs (A, B) and (A, C).


Table 2
The argumentation scenarios during the experimentation. For each system, the columns give the argumentation model (Arg. model), the evaluation functions for defeasible/indefeasible arguments, the use of the H relations (H's) and the reclassification of arguments (Recl.).

Sc.   System A                        System B                        System C
      Arg. model  Eval.      H's Recl. Arg. model  Eval.      H's Recl. Arg. model  Eval.      H's Recl.
1     EAFC        {T,ES}/k   –   –     EAFC        {ES,T}/k   –   –     EAFC        {ES,T}/k   –   –
2     EAFC        f1/k       –   –     EAFC        {ES,T}/k   –   –     EAFC        {ES,T}/k   –   –
3     EAFC        {T,ES}/k   –   –     EAFC        f1/k       –   –     EAFC        f1/k       –   –
4     EAFC        f1/k       –   –     EAFC        f1/k       –   –     EAFC        f1/k       –   –
5     EAFC        f2/k       –   –     EAFC        f2/k       –   –     EAFC        f2/k       –   –
6     EAFDC       f2/k       –   –     EAFC        f2/k       –   –     EAFC        f2/k       –   –
7     EAFC        f2/k       –   –     EAFDC       f2/k       –   –     EAFDC       f2/k       –   –
8     EAFDC       f2/k       –   –     EAFDC       f2/k       –   –     EAFDC       f2/k       –   –
9     EAF_A       f2/k       No  No    EAF_B       f2/k       No  No    EAF_C       f2/k       No  No
10    EAF_A       f2/k       Yes No    EAF_B       f2/k       No  Yes   EAF_C       f2/k       No  Yes
11    EAF_A       f2/k       No  Yes   EAF_B       f2/k       Yes No    EAF_C       f2/k       Yes No
12    EAF_A       f2/k       Yes Yes   EAF_B       f2/k       Yes Yes   EAF_C       f2/k       Yes Yes


Fig. 12. Partial representation of two argumentation models: a) the community argumentation model (EAFC) and b) the community argumentation model extended to capture dependency between correspondences (EAFDC).

Fig. 13. Partial representation of the argumentation model adopted by system A (EAF A ).

The first scenario mimics the FDO approach. Since the FDO approach does not have the notion of intentional argument, the argument-instantiation process was constrained to instantiate the intentional arguments with the value of the most preferred argument-type of each system (scenarios 2–4 also have this constraint). This guarantees that every intentional argument-instance is supported by the most preferred argument-instance. The other scenarios (2–8) exploit the EAF-based approach's feature concerning the adoption of argument evaluation functions instead of preferences on argument-types. Additionally, scenarios 9–12 have been set up based on (i) the systems' ability to exchange arguments through the H relations and (ii) the systems' ability to reclassify terminological argument-instances.

Five different argumentation models have been used in the experiments. EAFC (Fig. 12a) is a simplified version of the argumentation model previously introduced in Fig. 3. Notice that intentional and non-intentional arguments are represented by rounded and non-rounded rectangles respectively, and statements are represented by dashed rectangles. EAFDC (Fig. 12b) is the systems' private extension of EAFC that introduces an R relation between the arguments MatchArg and ExtStructuralArg. Further, to show the relevance of the H relations (H_A, H_S and H_M), it has been decided to use EAFDC as the common argumentation model for all systems. Additionally, this argumentation model (EAFDC) has been extended differently and privately by each system: EAF_A (Fig. 13), EAF_B (Fig. 14) and EAF_C (Fig. 15). The elements filled in gray are those belonging to the community argumentation model.

To foster the exchange of arguments, H_A and H_S relations have been established between the new elements and the elements of the common argumentation model. Given that, instances of these new arguments might be exchanged as instances of the terminological argument through the systems' internal reclassification, as captured in Table 3. Notice that this table reflects the systems' internal and private knowledge, so a system does not know the reclassification rules of its opponent.

As an example, argument-instances of type EAF_A:LexicalLabelArg are exchanged as instances of EAFDC:TerminologicalArg, which are further reclassified as (i) instances of EAF_B:WNLabelArg by system B and (ii) instances of EAF_C:LabelArg by system C.
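The reclassification mechanism above can be sketched in code. The following is an illustrative sketch, not the authors' implementation: it expresses the (experimenter's-eye) mapping of Table 3 as a lookup table, with the type names mirroring the paper and the data layout being an assumption. The matcher-dependent rules for system C's LabelArg (G_C1/G_C2) are omitted for simplicity.

```python
# Each receiving system maps an incoming terminological argument (sent on
# the wire as the shared EAFDC:TerminologicalArg) to one of its own
# private argument types. The keys below pair the receiver with the
# sender's original private type, as tabulated in Table 3.
RECLASSIFY = {
    ("B", "EAF_A:LexicalLabelArg"): "EAF_B:WNLabelArg",
    ("C", "EAF_A:LexicalLabelArg"): "EAF_C:LabelArg",
    ("C", "EAF_A:SyntacticLabelArg"): "EAF_C:LabelArg",
    ("A", "EAF_B:SoundexLabelArg"): "EAF_A:LabelArg",
    ("A", "EAF_B:WNLabelArg"): "EAF_A:LexicalLabelArg",
}

def reclassify(receiver: str, original_type: str) -> str:
    """Return the receiver's private type for an incoming terminological
    argument, or the shared type itself when no private rule applies."""
    return RECLASSIFY.get((receiver, original_type), "EAFDC:TerminologicalArg")

print(reclassify("B", "EAF_A:LexicalLabelArg"))  # EAF_B:WNLabelArg
```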

Concerning the data acquisition phase, the correspondences between each pair of ontologies were generated according to the matchers mentioned in Tables 4, 5 and 6 for systems A, B and C, respectively.



Fig. 14. The argumentation model adopted by system B (EAF_B).

Fig. 15. The argumentation model adopted by system C (EAF_C).

Table 3. Reclassification of arguments exchanged as terminological.

Original argument type sent     Reclassified as
EAF_A:LexicalLabelArg           EAF_B:WNLabelArg
EAF_A:LexicalLabelArg           EAF_C:LabelArg
EAF_A:SyntacticLabelArg         EAF_C:LabelArg
EAF_B:SoundexLabelArg           EAF_A:LabelArg
EAF_B:WNLabelArg                EAF_A:LexicalLabelArg
EAF_C:LabelArg (from G_C1)      EAF_A:LabelArg
EAF_C:LabelArg (from G_C2)      EAF_A:LexicalLabelArg

These tables also represent the interpretation functions and the thresholds adopted by the respective systems to generate arguments, considering that: (i) the correspondence content can be anything (e.g. a correspondence between concepts, between properties, or between a concept and a property) and (ii) the reasoning mechanism is Heuristic (cf. [17] for details).
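The role of the tr+ and tr− thresholds in Tables 4–6 can be sketched as follows. This is a hedged illustration only: the precise semantics of the interpretation functions follow [17], which is not reproduced here, so the function below is an assumption made for exposition, not the authors' implementation.

```python
# Assumed behaviour: a matcher yields a confidence in [0, 1] for a
# candidate correspondence; the interpretation function turns it into a
# positive statement (the correspondence holds) when the confidence
# reaches tr+, a negative statement when it falls below tr-, and no
# argument at all in between.
def interpret(confidence: float, tr_pos: float, tr_neg: float):
    """Return +1 (positive statement), -1 (negative statement), or
    None (no argument generated)."""
    if confidence >= tr_pos:
        return +1
    if confidence < tr_neg:
        return -1
    return None  # inconclusive region between tr- and tr+

# With G_A2 (tr+ = tr- = 0.75), a string-distance score of 0.9 yields a
# positive SyntacticalLabelSt statement and 0.4 yields a negative one.
print(interpret(0.9, 0.75, 0.75))  # 1
print(interpret(0.4, 0.75, 0.75))  # -1
```

Note that whenever tr+ equals tr− (as in all rows of Tables 4–6), every matcher score produces an argument, positive or negative.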

Concerning the argument evaluation phase, two dimensions have to be considered. First, there are the argument evaluation functions used by the systems. These are described in the "Evaluation functions" column of Table 2 according to the argument defeasibility: defeasible or indefeasible. An evaluation function defined as {T, ES} or {ES, T} means that arguments are evaluated according to the FDO approach, where terminological arguments (T) are preferred to external structural arguments (ES) or vice versa, respectively. Function f1 counts the number of support relationships (nsup) and the number of attack relationships (natt) of the argument-instance being evaluated (x), such that:

$$ f_1(x, mapV_{i-1}) = \begin{cases} 1 & \text{if } n_{sup}(x) > n_{att}(x) \\ -1 & \text{if } n_{att}(x) > n_{sup}(x) \\ 0 & \text{otherwise} \end{cases} $$
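Function f1 is straightforward to sketch in code. The dictionary-based graph representation below is an assumption for illustration; only the sign computation comes from the text.

```python
# Sketch of evaluation function f1: the sign of the difference between
# the number of supporters and attackers of an argument-instance.
def f1(x, supporters, attackers):
    """Return 1 if x has more supporting than attacking relationships,
    -1 in the opposite case, and 0 on a tie."""
    n_sup = len(supporters.get(x, []))
    n_att = len(attackers.get(x, []))
    if n_sup > n_att:
        return 1
    if n_att > n_sup:
        return -1
    return 0

supporters = {"a1": ["a2", "a3"]}  # a2 and a3 support a1
attackers = {"a1": ["a4"]}         # a4 attacks a1
print(f1("a1", supporters, attackers))  # 1
```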



Table 4. The interpretation function of system A.

ID     Matcher description     Statement type      tr+   tr−
G_A1   WNMatcher [32]          LexicalLabelSt      1.00  1.00
G_A2   String-distance [33]    SyntacticalLabelSt  0.75  0.75
G_A3   V-Doc [34]              LabelSt             0.70  0.70
G_A4   Max(G_A1, G_A2)^a       TerminologicalSt    0.80  0.80
G_A5   GMO [35]                ExtStructuralSt     0.50  0.50
G_A6   Falcon-AO [33]          MatchSt             0.70  0.70

a Corresponds to the aggregation of the alignments outputted by the input matching algorithms through the max function.

Table 5. The interpretation function of system B.

ID     Matcher description           Statement type    tr+   tr−
G_B1   Soundex [36]^a                SoundexLabelSt    0.75  0.75
G_B2   WNPlusMatcher [32]            WNLabelSt         1.00  1.00
G_B3   OWA(G_B1, G_B2, BiGram^b)^c   TerminologicalSt  0.60  0.60
G_B4   StructureMatcher [32]         ExtStructuralSt   0.70  0.70
G_B5   Max(G_B2, SMOA [39])          MatchSt           0.25  0.25

a Implemented in the SimMetrics project, available at http://sourceforge.net/projects/simmetrics/.
b Corresponds to the string-based matching algorithm available in SimPack [37] that exploits the frequency of substrings of length 2.
c Corresponds to the aggregation of the alignments outputted by the input matching algorithms through the OWA operator [38].

Table 6. The interpretation function of system C.

ID     Matcher description          Statement type    tr+   tr−
G_C1   Levenshtein [40]             LabelSt           0.75  0.75
G_C2   WNPlusMatcher [32]           LabelSt           1.00  1.00
G_C3   Avg(G_C1, G_C2, SMOA)^a      TerminologicalSt  0.70  0.70
G_C4   Avg(G_B4, SMOA)              ExtStructuralSt   0.80  0.80
G_C6   Op(Max(G_C2, SMOA, G_B4))^b  MatchSt           0.25  0.25

a Corresponds to the aggregation of the alignments outputted by the input matching algorithms through the linear average function.
b Corresponds to the global optimization of the input alignment by the Hungarian method [9].

Function f2 returns the weighted average between (i) the strength value of the argument-instance being evaluated (x) and (ii) the normalized difference between the sum of the strength values of all argument-instances supporting it and the sum of the strength values of all argument-instances attacking it, such that:

$$ f_2(x, mapV_{i-1}) = \frac{1}{3}\, mapV^{x}_{i-1} + \frac{2}{3} \left( \frac{\sum_{y\,R_{sup}\,x} mapV^{y}_{i-1} \;-\; \sum_{y\,R_{att}\,x} mapV^{y}_{i-1}}{\lvert y\,R\,x \rvert} \right) $$
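The weighted average computed by f2 can be sketched as follows. The dictionary layout, the identifiers, and the handling of arguments without neighbours are assumptions made for illustration; the 1/3 and 2/3 weights and the normalized net-support term come from the text.

```python
# Sketch of evaluation function f2: one third of the argument's own value
# from the previous iteration (mapV_{i-1}) plus two thirds of the
# normalized difference between the values of its supporters and the
# values of its attackers.
def f2(x, mapv, supporters, attackers):
    sup = supporters.get(x, [])
    att = attackers.get(x, [])
    neighbours = len(sup) + len(att)
    if neighbours == 0:
        return mapv[x]  # assumption: keep the previous value when isolated
    net = (sum(mapv[y] for y in sup) - sum(mapv[y] for y in att)) / neighbours
    return mapv[x] / 3 + 2 * net / 3

mapv = {"a1": 0.6, "a2": 0.9, "a3": 0.3}
# a2 supports a1, a3 attacks a1: 0.6/3 + (2/3) * ((0.9 - 0.3) / 2) = 0.4
print(round(f2("a1", mapv, {"a1": ["a2"]}, {"a1": ["a3"]}), 2))  # 0.4
```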

Second, regarding the selection of a preferred extension by the systems, a common criterion shared by all systems was defined. This criterion states that if more than one preferred extension is generated, the system must adopt the preferred extension that differs least from the one adopted in the previous iteration of the negotiation process. By using this criterion, the systems (i) become more consistent with the position previously assumed in the negotiation and (ii) do not give up their initial position so easily.
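The shared selection criterion can be sketched as a minimum over symmetric differences. Representing extensions as frozensets of argument identifiers is an assumption for illustration; the paper does not prescribe a data structure or a tie-breaking rule (here, ties fall to the first candidate in list order).

```python
# Sketch of the common criterion: among several preferred extensions,
# adopt the one whose symmetric difference with the previously adopted
# extension is smallest, i.e. the one that changes the least arguments.
def select_extension(candidates, previous):
    return min(candidates, key=lambda ext: len(ext ^ previous))

previous = frozenset({"a1", "a2", "a3"})
candidates = [
    frozenset({"a1", "a4"}),        # differs in 3 arguments
    frozenset({"a1", "a2", "a4"}),  # differs in 2 arguments -> chosen
]
print(sorted(select_extension(candidates, previous)))  # ['a1', 'a2', 'a4']
```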

6.2. Results

With respect to the resolution of conflicts, the results of the argument-based negotiation for the system pairs (A, B) and (A, C) are depicted in Table 7. The table shows: (i) the initial amount of conflicts existing between the two systems before the argumentation process execution, (ii) the amount of conflicts resolved during the argumentation process, (iii) the amount of conflicts remaining after the argumentation process, (iv) the percentage of resolved conflicts, and the percentage of conflicts (v) correctly and (vi) badly resolved, both with respect to the amount of resolved conflicts.

Regarding the alignment accuracy, Table 8 summarizes and characterizes two kinds of alignments: (i) the alignment generated by each system before the argument-based negotiation process and (ii) the agreed alignment obtained in each scenario after the argument-based negotiation process. Each alignment is characterized qualitatively by the accuracy measures Precision, Recall and F-Measure. It is worth noticing that each system's alignment before the negotiation process comes from the preferred extension evaluated by the system in the argument evaluation phase of the first iteration of the proposed process, which only considers the arguments generated by the system itself. Arguments put forward by the counterpart system are only considered in subsequent iterations of the process. This means that each system exploits



Table 7. Analysis of the conflicts between systems.

Sc.   System A vs. system B                              System A vs. system C
      Initial  Resolved  Remain  Total%   Corr.%  Bad%   Initial  Resolved  Remain  Total%   Corr.%  Bad%
1     1319     0         1319     0.00     0.00   0.00   493      0         493      0.00     0.00   0.00
2     1319     88        1231     6.67    67.05  32.95   493      51        442     10.34    78.43  21.57
3     1319     170       1149    12.89    98.82   1.18   493      75        418     15.21    98.67   1.33
4     1319     258       1061    19.56    87.98  12.02   493      126       367     25.56    90.48   9.52
5     995      769       226     77.29    93.76   6.24   360      221       139     66.39    90.50   9.50
6     995      742       253     74.57    95.15   4.85   360      213       147     59.17    94.37   5.63
7     995      740       255     74.37    95.27   4.73   356      206       150     57.87    93.69   6.31
8     995      757       238     76.08    94.06   5.94   356      219       137     61.52    93.15   6.85
9     293      3         290      1.02   100.00   0.00   50       24        26      48.00    75.00  25.00
10    293      243       50      82.94    89.30  10.70   50       50        0      100.00    66.00  34.00
11    293      29        264      9.90    75.86  24.14   50       37        13      74.00    67.57  32.43
12    293      257       36      87.71    89.88  10.12   50       38        12      76.00    71.05  28.95

Table 8. Summary and characterization of the alignments.

Sc.   System A             System B             System C             System A vs. system B  System A vs. system C
      Prec.  Rec.   F-M.   Prec.  Rec.   F-M.   Prec.  Rec.   F-M.   Prec.  Rec.   F-M.     Prec.  Rec.   F-M.
1     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  68.35  48.85  56.98    64.98  54.75  59.43
2     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  67.87  49.18  57.03    64.62  55.08  59.47
3     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  67.73  48.85  56.76    64.98  54.75  59.53
4     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  67.26  49.18  56.82    64.62  55.08  59.47
5     57.91  56.39  57.14  18.44  72.79  29.42  35.13  57.70  43.67  65.25  55.41  59.93    66.07  48.52  55.95
6     57.91  56.39  57.14  18.44  72.79  29.42  35.13  57.70  43.67  65.35  54.43  59.39    66.96  49.84  57.14
7     57.91  56.39  57.14  18.44  72.79  29.42  35.21  57.38  43.64  65.25  55.41  59.93    66.07  48.52  55.95
8     57.91  56.39  57.14  18.44  72.79  29.42  35.21  57.38  43.64  63.77  57.70  60.59    67.09  52.13  58.67
9     76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  80.87  48.52  60.66    78.49  47.87  59.47
10    76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  80.77  48.20  60.37    76.50  50.16  60.59
11    76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  81.15  50.82  62.50    78.49  47.87  59.47
12    76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  81.08  49.18  61.22    78.19  48.20  59.63

argumentation for both: (i) reasoning about what to believe (e.g. determining the system's initial alignment) and (ii) establishing an agreement with other systems (e.g. determining the agreed alignment). Therefore, the systems' alignment before the negotiation process may change from one scenario to another due to changes in the adopted argumentation model, data acquisition, argument generation and argument evaluation processes.
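The accuracy measures reported in Table 8 follow the standard definitions over alignments and a reference alignment. A minimal sketch, assuming alignments are represented as sets of correspondence pairs (the set representation and the example pairs are illustrative, not taken from the experiments):

```python
# Precision, recall and F-measure of an alignment against a reference
# alignment, both given as sets of correspondences.
def precision_recall_fmeasure(alignment, reference):
    correct = len(alignment & reference)
    precision = correct / len(alignment) if alignment else 0.0
    recall = correct / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

align = {("Person", "Human"), ("Paper", "Article"), ("Topic", "Area")}
ref = {("Person", "Human"), ("Paper", "Article"), ("Author", "Writer")}
p, r, f = precision_recall_fmeasure(align, ref)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```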

6.3. Analysis and discussion

The examination of these results shows that in the FDO approach (scenario 1) the systems were not able to resolve any conflict. This is due to the argument evaluation process of the FDO approach: whenever a system is able to generate an argument-instance of its preferred argument-type, the best the opponent system can do is send an argument-instance of the same type but with an opposite position, i.e. a system says a and the opponent system negates it (¬a). In this case, the system obtains two possible preferred extensions. Due to the settled criterion on preferred extension selection, the system opts for the preferred extension that maintains its previous position. Since none of the conflicts between the systems are resolved during the argumentation, the agreed alignment corresponds exactly to the intersection of the alignments devised by the systems before the argumentation process runs.

On the other hand, in all the other scenarios (where at least one feature of the proposed EAF-based approach is exploited) the systems were always able to resolve some conflicts. For example, by changing the argument evaluation function of the defeasible arguments only (scenarios 2 to 5), the rate of resolved conflicts varies from 6.67% to 77.29%. Moreover, independently of the amount of resolved conflicts, the percentage of conflicts correctly resolved is always very high (66% in the worst case).

Comparing the alignment accuracy in terms of f-measure achieved by the FDO approach (scenario 1) with the accuracy of the agreed alignments in all the other scenarios, one realizes that the alignment accuracy varies positively (∼5.5%) or negatively (∼3.5%) at most. These small variations occur at the same time that the conflicts are resolved, which allows us to perceive two issues:

• The great difficulty that a system has in persuading its opponent to accept the inclusion of a given correspondence in the agreement, which arises from



• The lack of evidence supporting the inclusion of a given correspondence in the agreement in contrast with the evidence against such inclusion.

Thus, it becomes clear that, contrary to the FDO approach, the proposed EAF-based approach is able to resolve conflicts even when the argumentation skills of the systems are very limited (only two kinds of arguments exist). Simultaneously with the conflict resolution, the accuracy of the agreed alignment improves.

Yet, it is important to bear in mind that saying that an agreed alignment is more or less sound than another depends on two metrics: (i) the resolved conflicts and their correctness and (ii) the alignment accuracy. However, since the proposed negotiation process aims to resolve conflicts, one may argue that an agreed alignment with a high level of correctly resolved conflicts can be taken (i.e. exploited by the computational systems) with more confidence than the same alignment with a lower level of correctly resolved conflicts. Thus, variations in alignment accuracy assess the impact of conflict resolution on the quality of the alignment. Another important fact concerns the initial amount of conflicts. It is perceivable that as the individual systems' matching abilities evolve (e.g. by adopting extended argumentation models) (i) the initial amount of conflicts between systems decreases and (ii) the accuracy of the alignment devised by the systems before the argumentation process improves. In this respect, the adoption of EAFDC by system C (scenarios 7 and 8) leads to a reduction of correspondences from its previous initial alignment (from 360 to 356).

Regarding the dependency-between-correspondences feature, by comparing the results of scenario 5, where none of the systems exploits the dependency feature, with the scenarios where at least one of the systems exploits it (scenarios 6 to 8), it is perceivable that:

• The amount of resolved conflicts slightly decreased;
• The percentage of conflicts correctly resolved slightly increased;
• The f-measure of the agreed alignment in scenarios 6 to 8 is greater than the one achieved in scenario 5. This is even more evident in scenario 8, where the two systems simultaneously exploit the dependency feature.

The combination of these three facts allows the conclusion that the dependency feature helps to improve the quality of both (i) the resolved conflicts and (ii) the accuracy of the agreed alignment.

The usefulness of the H relations feature should be measured in combination with the systems' ability to reclassify arguments. Hence, by comparing the amount of resolved conflicts and their correctness between scenario 9 (where none of these features is exploited) and scenario 12 (where both features are exploited), the usefulness of these two features for conflict resolution becomes evident. The results of scenarios 10 and 11, when compared to the results of scenario 9, allow conclusions about the persuasiveness of the system that is exploiting the H relations. While system A was very persuasive against both systems, system C was also very persuasive (but less so than system A). Instead, system B was inefficient, since the amount of resolved conflicts grows from 1.02% to 9.90% only. Similarly, the agreed alignment of scenarios 10 to 12 is better than or equal to the agreed alignment in scenario 9. The exception is scenario 10 for the system pair (A, B). Thus, considering the accuracy of the agreed alignments and the quantity and quality of the resolved conflicts, one may conclude that establishing H relations in a system's private argumentation model is useful when the opponent system is able to reclassify the argument-instances exchanged based on that feature. This might also be seen by the systems as an indication to refine the community argumentation model, as foreseen in the adopted ANP.

Finally, comparing in terms of f-measure the alignment devised individually by each system with the agreed alignment, it becomes clear that the systems profit from the argumentation process:

• System A is the one that profits least, since it has f-measure disparities from approximately −1.5% to +3.4%. This occurs because system A is able to generate the best initial alignments and is very confident in them. Despite this, in most of the scenarios it has profits instead of losses;

• System B has f-measure improvements that vary from approximately +14% to +36%;
• System C has f-measure improvements that vary from approximately +4% to +21%.

These f-measure improvements happen at the same time the conflicts are resolved.

7. Conclusions

The primary emphasis of the research presented in this paper is on proposing a novel argument-based negotiation approach that enables computational systems to resolve their ontology matching divergences. The proposed approach combines and applies to the ontology matching negotiation problem the core concepts of two generic artifacts: (i) ANP [15] as the negotiation process and (ii) EAF [16] as the argumentation framework. It is our conviction that both ANP and EAF are suitable for many negotiation scenarios/domains, including e-commerce and web services selection. In this respect, the proposed approach is captured in a software development framework that allows a guided and easy development of such features in diverse computational systems. This framework proved to be suitable and versatile by providing the mechanism to perform extensive and diverse experiments for evaluating the proposal with respect to the state-of-the-art approaches. To outperform state-of-the-art ontology matching negotiation approaches, the proposed approach followed a different line of



research. However, it is able to mimic such approaches, as demonstrated during the experiments. Furthermore, the proposed approach goes beyond the state-of-the-art approaches in the following ways:

1. It encourages computational systems to employ private arguments in their internal reasoning process by privately extending the public argumentation model. This feature relies on the EAF-specific constructs and semantics;

2. It is possible to take into consideration dependencies between correspondences under negotiation by explicitly capturing such dependencies in the model layer through the R relation between intentional arguments. An intentional argument is a fully-fledged argument corresponding to an object under negotiation, which can be affected (either directly or indirectly) by other intentional arguments;

3. It adopts an argument evaluation process allowing computational systems to express more complex and flexible preferences on arguments. This is supported by the ability to apply arbitrarily complex, domain-dependent or domain-independent evaluation functions that will typically exploit the adopted argumentation model, namely the R relation;

4. It allows the approach to be easily adapted and evolved to support scenarios with different requirements, namely concerning the amount and types of arguments that computational systems may plausibly exploit.

Considering the previous exposition, it is believed that the proposed contributions exceed the state of the art while providing a formal yet pragmatic software development framework for application in diverse computational systems.

Additionally, the presented experiments showed that the proposed argument-based negotiation process performs better than the state-of-the-art ontology matching argument-based negotiation approaches both quantitatively and qualitatively regarding the resolved conflicts and the accuracy of the agreed alignment. Moreover, the improvements over the state of the art demonstrate the need for and benefits of adopting an explicit, formal and extensible specification of a shared argumentation model in order to resolve conflicts and achieve better agreements. The proposed ideas depend on several factors such as the (interpretation of) matchers, the argumentation models and their modeling methodologies, and the design of evaluation functions. While these factors are indeed important and constrain the adoption of the proposed ideas, they are not systematically addressed in this paper and will deserve our future attention. Despite that, they are supported by the provided software development framework.

Acknowledgements

This work is partially supported by the Portuguese projects COALESCE (PTDC/EIA/74417/2006) of MCTES-FCT and World Search (QREN11495) of FEDER. The authors would like to acknowledge Jorge Santos, Maria João Viamonte, Jorge Coelho and Besik Dundua for their useful counsel, and Jane Walker for her revision of the document.

References

[1] J. Euzenat, P. Shvaiko, Ontology Matching, 1st ed., Springer-Verlag, Heidelberg, Germany, 2007.
[2] N. Silva, P. Maio, J. Rocha, An approach to ontology mapping negotiation, in: Workshop on Integrating Ontologies of the Third International Conference on Knowledge Capture, Banff (Alberta), Canada, 2005.
[3] P. Doran, T. Payne, V. Tamma, I. Palmisano, Deciding agent orientation on ontology mappings, in: 9th International Semantic Web Conference (ISWC), 2010.
[4] L. Laera, I. Blacoe, V. Tamma, T.R. Payne, J. Euzenat, T. Bench-Capon, Argumentation over ontology correspondences in MAS, in: 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007), Honolulu, Hawaii, USA, 2007, p. 228.
[5] OAEI, Ontology Alignment Evaluation Initiative, 2011 Campaign, available online: http://oaei.ontologymatching.org/2011/, 2011.
[6] E. Rahm, P.A. Bernstein, A survey of approaches to automatic schema matching, VLDB J. 10 (4) (2001) 334–350.
[7] P. Shvaiko, J. Euzenat, A survey of schema-based matching approaches, J. Data Semant. IV (2005) 146–171.
[8] D. Gale, L.S. Shapley, College admissions and the stability of marriage, Am. Math. Mon. 69 (1) (1962) 5–15.
[9] J. Munkres, Algorithms for the assignment and transportation problems, J. Soc. Ind. Appl. Math. 5 (1) (1957) 32–38.
[10] D.H. Ngo, Z. Bellahsene, R. Coletta, et al., A flexible system for ontology matching, in: S. Nurcan (Ed.), IS Olympics: Information Systems in a Diverse World, Springer, Berlin, Heidelberg, 2012, pp. 79–94.
[11] K. Saruladha, G. Aghila, B. Sathiya, A comparative analysis of ontology and schema matching systems, Int. J. Comput. Appl. 34 (8) (2011) 14–21.
[12] P. Maio, N. Silva, GOALS – A test-bed for ontology matching, in: 1st IC3K International Conference on Knowledge Engineering and Ontology Development (KEOD), Funchal (Madeira), Portugal, 2009, pp. 293–299.
[13] T. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13 (3) (2003) 429–448.
[14] M. Wooldridge, An Introduction to MultiAgent Systems, 2nd ed., Wiley, 2009.
[15] P. Maio, N. Silva, J. Cardoso, Iterative, incremental and evolving EAF-based negotiation process, in: T. Ito, M. Zhang, V. Robu, T. Matsuo (Eds.), Complex Automated Negotiations: Theories, Models, and Software Competitions, Springer, Berlin, Heidelberg, 2013, pp. 161–179.
[16] P. Maio, An extensible argumentation model for ontology matching negotiation, Ph.D. thesis, University of Trás-os-Montes, Vila Real, Portugal, 2012.
[17] P. Maio, N. Silva, J. Cardoso, Generating arguments for ontology matching, in: 10th International Workshop on Web Semantics (WebS) at DEXA, Toulouse, France, 2011, pp. 239–243.
[18] P. Maio, N. Silva, A three-layer argumentation framework, in: S. Modgil, N. Oren, F. Toni (Eds.), Theories and Applications of Formal Argumentation, vol. 7132, Springer, Berlin, Heidelberg, 2012, pp. 163–180.
[19] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artif. Intell. 77 (2) (1995) 321–357.
[20] C. Cayrol, M.C. Lagasquie-Schiex, On the acceptability of arguments in bipolar argumentation frameworks, in: Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2005, pp. 378–389.
[21] D. Walton, Argumentation theory: a very short introduction, in: Argumentation in Artificial Intelligence, Springer Publishing Company, Incorporated, 2009.


[22] M.E. Bratman, Intention, Plans and Practical Reason, Harvard University Press, Cambridge, MA, 1987.
[23] T.R. Gruber, A translation approach to portable ontology specifications, Knowl. Acquis. 5 (2) (1993) 199–220.
[24] T. Gruber, What is an ontology?, available online: http://www-ksl.stanford.edu/kst/what-is-an-ontology.html.
[25] H. Prakken, An abstract framework for argumentation with structured arguments, Argument & Computation 1 (2) (2010) 93.
[26] P. Baroni, M. Giacomin, Semantics of abstract argument systems, in: Argumentation in Artificial Intelligence, 2009, pp. 25–44.
[27] C. Cayrol, M.C. Lagasquie-Schiex, Coalitions of arguments: A tool for handling bipolar argumentation frameworks, Int. J. Intell. Syst. 25 (1) (2010) 83–109.
[28] C. Cayrol, M.C. Lagasquie-Schiex, Gradual valuation for bipolar argumentation frameworks, in: Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2005, pp. 366–377.
[29] L. Amgoud, C. Cayrol, M.C. Lagasquie-Schiex, P. Livet, On bipolarity in argumentation frameworks, Int. J. Intell. Syst. 23 (10) (2008) 1062–1093.
[30] N. Karacapilidis, D. Papadias, Computer supported argumentation and collaborative decision making: the Hermes system, Inf. Syst. 26 (2001) 259–277.
[31] B. Verheij, On the existence and multiplicity of extensions in dialectical argumentation, arXiv:cs/0207067, July 2002.
[32] Y. Kalfoglou, B. Hu, N. Shadbolt, D. Reynolds, CROSI – capturing, representing and operationalising semantic integration, available online: http://www.aktors.org/crosi/, 2005.
[33] N. Jian, W. Hu, G. Cheng, Y. Qu, Falcon-AO: aligning ontologies with Falcon, in: Proceedings of the K-CAP Workshop on Integrating Ontologies, Banff, Canada, 2005, pp. 87–93.
[34] Y. Qu, W. Hu, G. Cheng, Constructing virtual documents for ontology matching, in: Proceedings of the 15th International Conference on World Wide Web, 2006, pp. 23–31.
[35] W. Hu, N. Jian, Y. Qu, Q. Wang, GMO: a graph matching for ontologies, in: Proceedings of the K-CAP Workshop on Integrating Ontologies, Banff, Canada, 2005, pp. 43–50.
[36] R.C. Russell, US Patent 1261167 (A), 2 Apr. 1918.
[37] A. Bernstein, E. Kaufmann, C. Kiefer, C. Burki, SimPack: a generic Java library for similarity measures in ontologies, Technical report, University of Zurich, Department of Informatics, 2005.
[38] Q. Ji, P. Haase, G. Qi, Combination of similarity measures in ontology matching using the OWA operator, in: Proceedings of the 12th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'08), 2008.
[39] G. Stoilos, G. Stamou, S. Kollias, A string metric for ontology alignment, in: The Semantic Web – ISWC 2005, 2005, pp. 624–637.
[40] V. Levenshtein, Binary codes capable of correcting deletions, insertions, and reversals, Dokl. Akad. Nauk SSSR 163 (4) (1965) 845–848.