
Analyses for Service Interaction Networks with applications to Service Delivery ∗

S. Kameshwaran† Sameep Mehta‡ Vinayaka Pandit‡ Gyana Parija‡

Sudhanshu Singh§¶ N. Viswanadham†

Abstract

One of the distinguishing features of the services industry is the high emphasis on people interacting with other people and serving customers rather than transforming physical goods like in the traditional manufacturing processes. It is evident that analysis of such interactions is an essential aspect of designing effective and efficient services delivery. In this work we focus on learning individual and team behavior of different people or agents of a service organization by studying the patterns and outcomes of historical interactions. For each past interaction, we assume that only the list of participants and an outcome indicating the overall effectiveness of the interaction are known. Note that this offers limited information on the mutual (pairwise) compatibility of different participants. We develop the notion of service interaction networks, which is an abstraction of the historical data and allows one to cast practical problems in a formal setting. We identify the unique characteristics of analyzing service interaction networks when compared to traditional analyses considered in social network analysis and establish a need for new modeling and algorithmic techniques for such networks. On the algorithmic front, we develop new algorithms to infer attributes of agents individually and in team settings. Our first algorithm is based on a novel modification to the eigenvector-based centrality for ranking the agents, and the second algorithm is an iterative update technique that can be applied to subsets of agents as well. One of the challenges of conducting research in this setting is the sensitive and proprietary nature of the data. Therefore, there is a need for a realistic simulator for studying service interaction networks. We present the initial version of our simulator, which is geared to capture several characteristics of service interaction networks that arise in real life.

∗ Contact: (sameepmehta, pvinayak)@in.ibm.com
† Center for Global Logistics and Manufacturing Strategies (GLAMS), Indian School of Business, Hyderabad
‡ IBM India Research Laboratory, New Delhi
§ University of North Carolina, Chapel Hill
¶ The author was with IBM India Research Lab when the work was done.

1 Introduction

One of the distinguishing features of the services industry is the high emphasis on people interacting with other people and serving customers rather than transforming physical goods in the process. The focus of this paper is on analyzing patterns of interactions in services delivery to help understand the impact of individuals and their interactions on the quality/effectiveness. A typical service creation can be thought of as a workflow consisting of various stages of specialized activities. Therefore, service creation involves people, each performing the specialized tasks of the stage that he or she is responsible for, and communicating with others who are performing related/dependent activities. It should be emphasized that we use the term workflow in a broader sense than the specific notions it carries in specialized literature such as SOA. We refer to the people or entities involved in performing different stages of a workflow as agents. Ideally, the organization should capture all the details of the execution, which include individual agent performance, the quality of interaction among agents, etc. However, issues like confidentiality, lack of visibility, and in some cases lack of control, e.g., the presence of Independent Service Vendors (ISVs) in workflow execution, make it improbable (if not impossible) to capture the details at such a micro level. Typically, organizations can track only macro-level features like success/failure, cost, etc. of a workflow. Merely aggregating the macro-level measurements offers limited insight into the individual capabilities of the agents and the effectiveness of their interaction. In this work, we consider the issue of discovering individual traits of agents based on measurements at the level of workflows. Specifically, our goal is to develop techniques that give better insight than those obtained by simply aggregating across all workflows. We begin by considering some examples where analysis of such macro-level data can prove to be useful.

1.1 Examples Let us consider the case of an orchestrator of supply chains. Orchestrator is a management-literature metaphor to describe the role of a player who organizes and manages a set of activities in a network by ensuring value-creation opportunities in the system and value appropriation

377 Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.


mechanisms for each player. Examples of orchestrators are Li & Fung (Hong Kong) in retail [6] and Medion (Germany) in personal computing [18]. Let us consider the case of Li & Fung, which orchestrates an innovative service supply chain in the apparel industry [10]. In the words of their group chairman Victor Fung, they "create a customized value chain for each customer order and orchestrate a production process that starts from raw materials all the way to the finished products". In other words, they seamlessly bring together the large apparel retailers and a global pool of apparel-related suppliers and create value without owning any of the resources used in the production. The key differentiator that Li & Fung brings in is its intimate knowledge of the global service network. Every major order of a retailer gets executed by creating a customized supply chain in the global network of suppliers. Each order typically gets executed by multiple workflows, which go through the complete product cycle and are orchestrated by the domain experts at Li & Fung. The revenue of Li & Fung for the services offered alone is USD 11 billion, and the actual value of the retail goods is greater by an order of magnitude. Identifying the providers for a particular order is based on network constraints and local economic, political, public-policy, and cost considerations. But note that each workflow is a coordinated production activity involving several independent vendors of different functionality. Therefore, the effectiveness of their execution is highly dependent on their interactions, and this in turn should affect how future workflows are composed. Studying past orders, in terms of the vendors involved and the overall effectiveness of their delivery, can potentially provide extremely relevant feedback about their performance in such a loosely coordinated setting.

Let us now consider the case of distributed software development. A typical software service firm delivers its services from geographically distributed locations. Teams at each location possess expertise in different functionalities from a broad domain. Examples of the domains could be product design and architecture, development capability in different technical disciplines, integration, testing, and debugging. Typical projects flow through different functionality modules of these domains at various stages of the project in a precedence-oriented fashion. Parameters associated with a project, such as overheads incurred, efficiency, success, etc., depend heavily on the behavioral aspects of the different teams in distributed locations and the effectiveness of communication/coordination between related modules. Typically, the project manager ensures that the teams have the required skill set to successfully complete the project. However, choosing teams with the sole intention of matching skill-set demand may not produce optimal teams. In this scenario, we can view each project as a workflow and each team as an agent, as it is responsible for basic units of the workflows. Organizational constraints allow us to measure the effectiveness of each project only at a macro level rather than at the level of how individual teams performed at various stages. By analyzing the macro-level data corresponding to the agents and the workflows in innovative ways, a firm can not only gain a better understanding of the team dynamics but also arrive at better team compositions in the future. Recently, several research papers in the area of software engineering have considered these and other related issues in distributed software development [21, 20, 22].

1.2 Salient Features At this stage, we would like to highlight some of the salient features of the examples that we considered. In the case of Li & Fung, in which the orchestration is over thousands of suppliers distributed globally, the team compositions are not often repeated, unlike production environments where the team compositions repeat on a routine basis. The duration of each interaction is also quite short in comparison to those in production environments. Therefore, it is essential to take into account the effect of each workflow on the effectiveness of the overall service composition and delivery. We also observe that micro-monitoring of individual agents is not possible for various reasons. In the case of Li & Fung, the agents are independent entities and hence very difficult to control and monitor at a micro level. In the case of an intra-organization service chain like the software development scenario, micro-monitoring of aspects like the effectiveness of communication across two dependent stages of the project could affect team morale and also increase the cost of project management. Therefore, we assume that only a macro-level assessment of each workflow is available. Each workflow could be designated as having met or not met certain acceptable criteria, or there could be a gradation between 0 and 1.

1.3 Contrast with related literature If we view the agents as entities in a network and each workflow as a relation between entities, the service delivery scenarios look similar to traditional social networks. We briefly highlight the uniqueness of service delivery scenarios as compared to social network analysis (SNA). Traditionally, SNA has focused on studying networks that arise from social interactions of people, socio-technological networks like the WWW, and collaboration networks of scholarly articles. It has mainly concentrated on dyadic relations (sometimes even reducing relations involving multiple entities to multiple dyadic relations) and on discovering structural properties of such networks. Firstly, from the point of view of service delivery, the relations are either hyper-edges over the agents or workflows involving multiple agents. Secondly, the focus of the analysis in the service delivery setting is on the fact that, with each workflow, a measure of its effectiveness is recorded. The idea is to use the pattern of these interactions and their effectiveness to infer micro-level attributes of agents and their interactions.



The current problem domain is also significantly different from traditional supply chain processes. What sets services and industrial/production processes apart are (i) the degree to which human interactions dominate the total time and (ii) the degree to which human interactions influence the overall quality. To understand these differences in detail, especially from the supply chain viewpoint, we point readers to an excellent survey article by Dietrich and Harrison [8]. They highlight the novelty of the mathematical techniques required to analyze service delivery.

1.4 Our contribution Informally, service interaction networks are networks arising in service delivery, out of agents and their interactions in executing workflows. As mentioned earlier, such interaction networks provide a generalized framework to model a wide range of scenarios originating in the services delivery setting. In this paper, we initiate a systematic study of this topic.

Unlike the well-known sources of data for SNA such as the WWW, DBLP, etc., data on service interaction networks is hard to obtain. The main reason is the proprietary nature of the data. It contains crucial private data of organizations that cannot be shared externally. Anonymization of the data is not a solution either: it would reveal confidential information about the organization. For example, a company may not want to reveal that, in its private estimate, a certain percentage of its interactions were unsatisfactory. So, realistic data generation is a major challenge. Note that generating data from simplistic mathematical models of agent behavior does not work, due to the dominance of human aspects in service delivery. At the same time, randomized assignment of performance levels for each interaction is not helpful either: it not only results in highly inconsistent data, but is also not reflective of real-life scenarios. One would like to generate data that is close to real life and has a reasonable level of consistency. Generating such data is a research topic by itself. Towards this goal, we have developed a simulation framework. The framework can be instantiated to generate data that reflects various real-life scenarios.

Related literature in SNA by computer scientists as well as sociologists offers starting points for studying service interaction networks. In particular, the eigenvector-based approaches seem to be relevant from the point of view of service interaction networks. We present several ways of extending this line of work for analyzing service interaction networks and discuss the advantages and disadvantages of each approach.

We also develop an intuitive iterative refinement technique to compute reputations of agents that overcomes some of the limitations of simply using the aggregates. We illustrate the efficacy of our approach both conceptually and empirically on data generated by the simulation framework. We also consider the impact of specific "links" on the service delivery. For example, agents a and b might be effective agents individually, but workflows in which a task performed by a is followed by b might not be as effective. This suggests that the collaboration between a and b is not desirable and should be avoided in future workflow compositions. Our work introduces a very important analysis domain for service delivery and presents several interesting future directions.

2 Analysis Framework

In this section, we formulate the problems considered in this paper. For ease of exposition, we call an organization responsible for end-to-end service delivery the orchestrator. We call entities (be they individuals, teams, or independent service providers) that are responsible for independent units of work agents.

2.1 Service Interaction Networks In our setting there is an orchestrator O who can avail the services of a set of agents V = {1, 2, . . . , N}. The orchestrator O specializes in accomplishing high-level tasks from a broad domain, and the agents provide the different functionalities to accomplish the high-level tasks. Given a task t, the orchestrator composes a workflow consisting of a subset of V to accomplish the task. We assume that the set T = {t_1, t_2, . . . , t_T} denotes the set of tasks performed by the orchestrator over a significant period of time. In the most general form, a workflow W_t can be represented as a graph G_t(V_t, A_t) with vertices V_t ⊆ V and edges A_t between vertices in V_t that model the direct interactions between any two actors in V_t. However, certain tasks require repeated interactions, which are cumbersome to represent in terms of direct edges. Therefore, we assume that each task is accomplished by a workflow which is represented as a hyper-edge connecting all the involved agents. Abstractly, the hyper-edge highlights the fact that an interaction involving its agents took place before the task could be accomplished. Associated with each hyper-edge is an "outcome", which is a measure of the effectiveness of its execution. In general, the outcome could be a real number between 0 and 1, but our main interest will be in the special case when the workflows are either "successful" (i.e., 1) or "failures" (i.e., 0). We call the collection of agents along with the hyper-edges representing the various tasks and their outcomes a service interaction network. An instance of a service interaction network is given by (V, T, ∪_{t∈T} (W_t, R_t)), where the tuples (W_t, R_t) associate workflows with their outcomes. Figure 1 shows an example with N = 7 and T = 5.
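As a minimal, illustrative sketch (the data and helper names below are our own, not from the paper), an instance (V, T, ∪_{t∈T}(W_t, R_t)) can be represented as a list of hyper-edges paired with binary outcomes:

```python
from typing import FrozenSet, List, Tuple

# A workflow is a hyper-edge (the set of participating agents)
# together with its outcome: 1 for success (S), 0 for failure (F).
Workflow = Tuple[FrozenSet[int], int]

agents = set(range(1, 8))  # V = {1, ..., 7}

# Five hypothetical workflows (illustrative, not the ones in Figure 1)
history: List[Workflow] = [
    (frozenset({1, 2, 3}), 1),
    (frozenset({2, 4}), 0),
    (frozenset({1, 5, 6}), 1),
    (frozenset({3, 4, 7}), 0),
    (frozenset({1, 2, 7}), 1),
]

def workflows_of(agent: int) -> List[Workflow]:
    """All hyper-edges in which the given agent participated."""
    return [(w, r) for (w, r) in history if agent in w]

print(len(workflows_of(1)))  # agent 1 participated in 3 workflows
```

Using frozensets makes the hyper-edge nature explicit: nothing about the pairwise interactions inside a workflow is recorded, only the participant set and the macro-level outcome.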

Typically, workflows are composed by the orchestrator based on (i) system constraints in terms of capacity and capabilities of the agents, and (ii) social, geographical, political, and economic factors. In addition, the orchestrator has to take into account the individual and team dynamics of the agents. As explained in the examples presented in the introduction, it is not always feasible to micro-monitor the workflow executions. So, one of the main themes of analyzing service interaction networks is to infer useful information about the agents from the high-level measurements across workflows. For example,

• How does an agent impact the success of the workflowsin which it is involved?

• What is the impact of involving a specific subset of agents on a workflow? Specifically, how does the existence of a link between two agents affect the success of workflows in which they are involved?

• Are there any dominant or influential agents emergingin the network?

Note that an obvious approach to tackle the above questions is based on aggregation. For example, to measure the effectiveness of an agent, one could just measure the fraction of successful workflows among all the workflows in which the agent is involved. Similarly, we can extend this to assess the effectiveness of direct links and larger subsets. Some of the limitations of such an approach are noted below.

Consider a successful and influential super-agent a who ensures that all the workflows that he is involved in succeed. Consider an agent b who is failure-prone when teamed with ordinary agents. Let 80% of the workflows of b also contain a, and let us further say that a large fraction of the rest of b's workflows have failed. The aggregate-based approach would infer that b is a highly successful agent. However, we would like to infer that b is prone to failures (and maybe needs to be targeted in training programs). A different problem arises when we use aggregates to make transitive inferences. Suppose c and d are indeed effective agents. An aggregate-based method would suggest that it is okay to compose a workflow involving c and d. However, a closer look at direct interactions between c and d may suggest otherwise.
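The super-agent confound above is easy to reproduce on synthetic data (the agent names and counts here are illustrative, chosen to match the 80% figure in the text):

```python
# Synthetic history: 80% of b's workflows include the super-agent a and
# succeed because of a; the remaining 20% of b's workflows fail.
history = [({"a", "b"}, 1)] * 8 + [({"b", "c"}, 0)] * 2

def success_rate(agent):
    """Naive aggregate: fraction of successful workflows involving the agent."""
    outcomes = [r for (w, r) in history if agent in w]
    return sum(outcomes) / len(outcomes)

def success_rate_without(agent, other):
    """Success fraction of the agent's workflows that exclude `other`."""
    outcomes = [r for (w, r) in history if agent in w and other not in w]
    return sum(outcomes) / len(outcomes)

print(success_rate("b"))               # 0.8 -- b looks highly successful
print(success_rate_without("b", "a"))  # 0.0 -- b never succeeds without a
```

The naive aggregate rates b at 0.8, while conditioning on the absence of a exposes the failure-proneness; the algorithms discussed later aim to surface exactly this kind of hidden attribute without enumerating every conditional slice.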

We consider algorithmic approaches to tackle the above questions. We would like them to deal with behavioral inconsistencies of agents, a commonly encountered phenomenon in service delivery settings.

2.2 Related Literature In this section, we summarize related literature and highlight the uniqueness of our formulations with respect to previous work. Some of the previous works need to be examined more closely to ascertain their applicability to our formulations. Section 3 considers some of the previous works as starting points for representation and analyses. The purpose here is to briefly highlight the uniqueness of our work.

The distinction of our setting from traditional SNA, which focuses on structural properties of networks of dyadic relations, was highlighted in the introduction. An excellent survey of the structural analyses in SNA can be found in articles

Figure 1: An example with N = 7 actors and T = 5 workflows. (Each task's workflow is a hyper-edge over agents A1–A7, with outcome S or F.)

by Newman [15, 16, 17]. Some of the well-known examples are the studies on the "small world" phenomena in such networks [13] and studies on the "shape" of the WWW [5]. Most studies in SNA do not focus on the properties of the links themselves. Recently, in the context of analyzing telephone call-graphs [14, 19], the weights on the links were used to discover communities. However, in our case, the label on the hyper-edge is indicative of the effectiveness of the corresponding workflow. Moreover, we are interested in inferring reputations of individual agents and their interactions.

If we denote the agents as nodes and the interactions between them as edges, then we have interaction networks, much like social networks. Centrality concepts are widely used in social networks for studying the structural importance of nodes in the network. For interaction networks, this translates to quantifying the status of the agents based on their interactions. However, the application of centrality to interaction networks is non-trivial due to the presence of an outcome for each workflow and of preferential relations over the outcomes (S is preferred over F). In the next section, we show how to construct an interaction network from the workflows and how centrality concepts can be applied to interaction networks.

3 Interaction Network as Graphs

In this section, we model the interaction networks as social networks. The basic mathematical structure for visualizing a social network is a graph. The agents are represented by nodes, with the linkages between the agents as edges. The linkages could arise due to evaluation of one person by another (friendship, liking, respect), transfer of material resources (business transactions, lending or borrowing things), association or affiliation (club membership), behavioral interaction (talking together, sending messages), formal relations such as authority, and biological relationships such as kinship or descent. A linkage establishes a tie at the most basic level between a pair of agents. The tie is an inherent property of the pair. The linkage in an interaction network is the quantification of the past interactions between any two agents that happened in the workflows.

Social network analysis is concerned with understanding the linkages among agents and the implications of these linkages. Some of the popular topics of focus are: (1) quantifying the structural importance of agents in the network; (2) identifying the core and peripheral agents; (3) analyzing groups and sub-groups; and (4) structural measures of social capital to identify which features of the network contribute to agents. In this work, our focus is on identifying key agents based on their past interactions with the other agents, working as teams. If the interaction network is modeled as a graph with agents as nodes, then the edges characterize the strength of the interaction. The key agents can then be identified by their structural importance in the network.

A relevant stream of research in social network analysis that measures the structural importance of an agent relative to that of the others in the network is centrality. There are numerous standard measures of centrality [23], each appropriate for different environments. The four popular centrality measures are degree, closeness, betweenness, and eigenvector [9, 2].

• Degree: The centrality of an agent is a function of the number of other agents to which it is adjacent (or directly connected).

• Closeness: An agent that can be easily reached fromother agents is more central.

• Betweenness: An agent that lies on a greater number of geodesic paths (shortest paths) between pairs of agents in the network has a higher centrality score.

• Eigenvector: The centrality of an agent depends on the centrality of the agents to which it is adjacent.

The closeness and betweenness measures are appropriate for networks that model flows and transmissions. Degree centrality is concerned with how many others an agent is linked with. For interaction networks, the centrality of an agent depends on whom he interacts with and how many of them. Eigenvector centrality [2] is the appropriate measure for interaction networks: it takes into account the centralities of the other agents with which an agent interacts. This is relevant to our objective of studying the impact that individuals have in a team setting.

Firstly, the interaction network has to be modeled as a graph from the workflows. A distinct feature of our problem is the presence of outcomes for the workflows, which have to be taken into consideration. Given that the binary outcomes S and F are diametrically opposite in their values, there are two natural approaches: (1) construct two different interaction networks, one for each outcome, or (2) construct a signed-value graph with positive edges to denote interactions of S workflows and negative edges for F workflows. First, we shall adopt both these approaches and discuss their limitations. Then we propose a novel approach that can also model workflows with finite discrete outcomes, rather than just S and F.

Let ∆ = [δ_ij] be the N × N interaction matrix that captures the strength of interactions between the N agents. If (i, j) is the edge between i and j in the interaction network, then δ_ij is the weight on that edge. The matrix ∆ has to be constructed from the workflows; let us ignore the outcomes for brevity. With a slight abuse of notation, let δ^t_ij denote the strength of interaction between actors i and j in workflow t. The following are some desirable properties of δ^t_ij:

1. [No influence] If i ∉ V_t or j ∉ V_t, then δ^t_ij = 0.

2. [Intra-workflow influence] δ^t_ij should be a function of the path length between i and j in G_t.

3. [Asymmetry] Based on the relative influence of the roles of i and j, δ^t_ij ≠ δ^t_ji.

4. [Past influence] The overall interaction δ_ij can be derived as:

   δ_ij = Σ_t μ_t δ^t_ij,   μ_t ≤ μ_{t+1}

The first property is a trivial one that emphasizes the importance of participation in the same workflow as the necessity to qualify the linkage as interaction. The intra-workflow influence allows one to differentiate intra-workflow and inter-workflow interactions. Let t̀ and t̃ be two different workflows, and let

(3.1)   (i, j), (j, k) ∈ E_t̀,   (i, k) ∉ E_t̀
(3.2)   (j, l) ∈ E_t̃,   i ∉ V_t̃

The traditional assignment of weights in social networks emphasizes only direct interactions:

(3.3)   δ^t̀_ij = δ^t̀_jk = 1;   δ^t̀_ik = 0
(3.4)   δ^t̃_jl = 1;   δ^t̃_il = 0

Intuitively, δ^t̀_ik > δ^t̃_il, as i and k have worked on the same project, whereas i and l have not. Thus the intra-workflow influence property is desirable if we are to differentiate the above scenarios. The asymmetry property is useful if the responsibilities of i and j are hierarchically structured: if i is above j, then the influence from i to j will be greater than the reverse. The last property is about how



Figure 2: Interaction network constructed from workflows with outcome S

we obtain the overall strength of interaction based on the individual workflow interactions. The past influence factor μ_t can be used to model the relative importance of the influence with respect to time. One can easily see that by judiciously choosing μ_t, we can arrive at various kinds of past influences: uniform, sliding window, etc.
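The past-influence aggregation can be sketched as follows; the function and variable names are ours, not the paper's, and the sliding-window weighting is just one illustrative choice of μ_t:

```python
from collections import defaultdict

def aggregate_interactions(per_workflow_deltas, mu):
    """Combine per-workflow strengths delta^t_ij into the overall delta_ij.

    per_workflow_deltas: list indexed by workflow t of dicts {(i, j): strength}.
    mu: function t -> non-decreasing weight mu_t (the past-influence factor).
    """
    delta = defaultdict(float)
    for t, deltas_t in enumerate(per_workflow_deltas):
        for (i, j), strength in deltas_t.items():
            delta[(i, j)] += mu(t) * strength
    return dict(delta)

# Two of the past-influence schemes mentioned in the text:
uniform = lambda t: 1.0                                   # every workflow counts equally
sliding_window = lambda t, w=2, T=3: 1.0 if t >= T - w else 0.0  # only the recent ones
```

Any non-decreasing μ_t fits the fourth property; the window variant simply assigns zero weight to workflows older than the window.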

We will now explore the approach of modeling the outcomes using two different interaction networks. Social networks allow multiple relations to be defined on the same set of agents. Hence, one can define linkages related to S workflows and linkages related to F workflows. For brevity, we consider them as two different networks with interaction matrices ∆_S and ∆_F. We will use the example in Figure 1 to illustrate this approach. The ∆_S is constructed only from S workflows and we assume symmetry: δ^t_ij = δ^t_ji. The intra-workflow influence is modeled as follows:

(3.5)  δ^t_ij = 1 / P^t_ij

The P^t_ij is the shortest path length between i and j in workflow t, where each edge has unit weight. As the example has few workflows, the past influence is uniform: μ_t = 1.
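This weighting can be computed for a single workflow graph with a breadth-first search from each participant; the following is a sketch with our own names, not the authors' code:

```python
from collections import deque

def workflow_strengths(edges):
    """Compute delta^t_ij = 1 / P^t_ij for one workflow, where P^t_ij is the
    shortest-path length (unit edge weights) between i and j in G_t.
    `edges` is a list of undirected agent pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    strengths = {}
    for src in adj:                      # BFS from every agent in the workflow
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for dst, d in dist.items():
            if dst != src:
                strengths[(src, dst)] = 1.0 / d
    return strengths
```

For a chain A–B–C this yields δ(A,B) = 1 and δ(A,C) = 1/2, matching the intra-workflow influence property.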

Figure 2 shows the interaction network of agents modeled from the workflows with S outcomes in Figure 1. Let x_j denote the eigenvector centrality of agent j and ∆_S denote the interaction matrix. The eigenvector centrality [2] is given by:

(3.6)  ∆_S x = λx

The λ is the largest eigenvalue of ∆_S and x is its corresponding eigenvector. In the linear summation form, the above reduces to:

(3.7)  x_j = (1/λ) Σ_i δ^S_ij x_i,  ∀j

Eigenvector centrality measures the centrality of an agent as a linear combination of the centralities of the agents to which it is connected. Unlike degree, which weighs every adjacent agent equally, the eigenvector weights adjacent agents according to their centralities. The agents can then be ranked based on the component-wise values of the eigenvector: if x_j1 ≥ x_j2 ≥ x_j3 ..., then the ranking of the agents is j1, j2, j3, .... The idea of using the eigenvector for ranking dates back to the 1950s [24, 11]; it was also later used to rank pages on the web.
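The eigenvector-centrality ranking can be reproduced in a few lines of NumPy. This is a generic sketch of the standard computation for a symmetric interaction matrix, not the authors' implementation:

```python
import numpy as np

def eigenvector_ranking(delta):
    """Rank agents by the principal eigenvector of a symmetric interaction
    matrix (Delta_S x = lambda x)."""
    eigvals, eigvecs = np.linalg.eigh(delta)   # eigenvalues in ascending order
    x = eigvecs[:, -1]                         # eigenvector of the largest eigenvalue
    if x.sum() < 0:                            # fix the sign convention
        x = -x
    return list(np.argsort(-x)), x             # agents in decreasing centrality

# Example: a 3-agent chain in which agent 1 interacts with both others.
delta_s = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
order, scores = eigenvector_ranking(delta_s)   # agent 1 is ranked first
```

Agent 1, being connected to two agents that are themselves connected to it, receives the highest score, illustrating how the eigenvector weights neighbors by their own centralities.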

One can similarly construct an interaction graph for workflows with F outcomes. The rankings for the interaction matrices ∆_S and ∆_F are:

∆_S: A2, A3, A1, A5, A7, A4, A6
∆_F: A5, A1, A6, A2, A7, A4, A3

The above rankings are of little help, as one cannot be used ignoring the other, and together they do not yield a consistent ranking. For example, A1 is preferable to A5, as A1 is ranked higher than A5 in the S ranking while A5 is ranked higher in the F ranking. However, the preference between A1 and A4 cannot be deduced. Hence, even at the level of pairwise comparisons, it is not possible to derive preferential rankings.

The second approach is to model the interaction network as a signed graph with both positive and negative links. The signed interaction matrix is given by:

(3.8)  ∆_S − ∆_F

For handling relations with negative status, an eigenvector based centrality measure was proposed in [4]. However, it is only applicable to matrices with a balanced structure. If positive edges denote friendship and negative ones denote enmity, then a balanced structure exists if a friend of a friend is a friend, an enemy of a friend is an enemy, a friend of an enemy is an enemy, and an enemy of an enemy is a friend. For signed graphs with balanced structures, the vertices can be partitioned into two subgraphs such that the links within each subgraph are positive and the links between the two subgraphs are negative. One can extend beyond balanced structures and consider clusters, with more than two subgraphs. For our problem, clusters and balanced structures are not applicable, as the pairwise interaction between any two agents cannot be claimed to be positive or negative based on the outcomes. In the example in Figure 1, A2 and A6 have directly interacted in one workflow with an S outcome and one with an F outcome. Hence, one also cannot directly use techniques for ranking webpages with negative links.

The problem with the above modeling approaches is the implicit assumption that S and F are diametrically opposite outcomes. In the following, we assume them to be outcomes with different utilities, possibly including negative ones. Then the pairwise



interactions cannot be negative by themselves, even if they have resulted in an F outcome. We propose a novel technique, where the outcomes {S, F} are included as agents in a directed network. The main rationale for the specific directed graph construction that we use is the fact that the outcomes have certain fixed utilities and we would like to rank the agents based on the utilities of the outcomes; that is, there is an inherent asymmetry between the normal agents and the special agents corresponding to S and F. Therefore, we create directed edges that allow the utilities (or the influence) of the outcomes to be transmitted to the agents, but disallow any transmission in the reverse direction. Details of the construction are as follows:

(3.9)  δ^t_{S,j} = 1 if R_t = S, and 0 otherwise

(3.10)  δ^t_{F,j} = 1 if R_t = F, and 0 otherwise

The interactions between the outcomes and the agents are directed:

(3.11)  δ^t_{j,R_t} = 0,  ∀j, ∀t, ∀R_t

The interaction matrix ∆ is of order (N + 2) × (N + 2); it is directed and hence not symmetric:

(3.12)  δ_ij = δ_ij(S) + δ_ij(F),  ∀i, j ∈ V

(3.13)  δ_{R_t,j} = Σ_t δ^t_{R_t,j},  ∀j, ∀R_t

(3.14)  δ_{j,R_t} = Σ_t δ^t_{j,R_t},  ∀j, ∀R_t

The directed interaction network corresponding to the example in Figure 1 is shown in Figure 3.
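A minimal sketch of this construction, placing S and F at indices N and N+1 and assuming unit agent-agent strengths per shared workflow (the paper uses inverse path lengths):

```python
import numpy as np

def build_outcome_matrix(n_agents, workflows):
    """Build the (N+2) x (N+2) directed interaction matrix in which S and F
    are modeled as extra agents.  Each workflow is a (participants, outcome)
    pair with outcome 'S' or 'F'.  Illustrative sketch only."""
    S, F = n_agents, n_agents + 1
    delta = np.zeros((n_agents + 2, n_agents + 2))
    for participants, outcome in workflows:
        src = S if outcome == 'S' else F
        for j in participants:
            delta[src, j] += 1.0      # outcome -> agent edges (eqs. 3.9/3.10)
            # delta[j, src] stays 0: no agent -> outcome edges (eq. 3.11)
        for i in participants:        # symmetric agent-agent interactions
            for j in participants:
                if i != j:
                    delta[i, j] += 1.0
    return delta
```

Because no edges point back into the outcome rows, the outcomes can transmit utility to the agents but never receive influence, matching the asymmetry described above.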

One of the problems with the directed construction is that eigenvector centrality cannot be directly used for directed graphs. However, we leverage a result by Bonacich [3] in which it is shown that a centrality measure called alpha-centrality (similar in spirit to eigenvector centrality) can be computed for directed graphs. The main computational task of the alpha-centrality is the following:

(3.15)  x = (I − α∆^T)^{−1} e

The vector e is an exogenous source of information and the parameter α reflects the relative importance of endogenous (∆) versus exogenous (e) factors in the determination of centrality. The centrality x is shown to exist for 0 ≤ α < 1/λ, where λ is the largest eigenvalue of ∆. The e is an external source of status that is independent of the interactions. Let y_ij denote the entries of the inverse matrix (I − α∆^T)^{−1}. Then the

Figure 3: Interaction network with S and F as agents

centrality scores are given by:

(3.16)  x_j = Σ_i y_ji e_i

Note that by construction y_{R_t,i} = 0 and y_ii = 1, ∀i and ∀R_t. Hence, the centralities of the outcomes are unaffected: x_{R_t} = e_{R_t}. If the vector e denotes judiciously chosen utilities, then the centrality score essentially captures both the interactions among agents and outcomes, and the utilities of the outcomes. The following is one possible assignment of e:

(3.17)  e_j = 1,  ∀j ∈ V

(3.18)  e_S = −e_F

All the agents have equal status, and S and F have opposite statuses with some high absolute value other than 1. Table 1 shows the rankings obtained for e_S = 10, e_F = −10, and for various values of α. The largest eigenvalue of ∆ is λ = 6.222. Observe that in a suitable range of α, the rankings show consistency. As far as we know, there is no study on choosing the appropriate α and e that give rise to consistent rankings. We are currently exploring ways to automatically compute the range of α and the vector e that give rise to consistent rankings.
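Equation (3.15) is a single linear solve; the following sketch computes it directly (for 0 ≤ α < 1/λ the matrix is invertible):

```python
import numpy as np

def alpha_centrality(delta, e, alpha):
    """Alpha-centrality x = (I - alpha * Delta^T)^{-1} e (Bonacich).
    Valid for 0 <= alpha < 1 / lambda_max(Delta).  Sketch only."""
    n = delta.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * delta.T, e)
```

With the construction above, e would carry 1 for every agent and the chosen utilities (e.g., 10 and −10) for the S and F rows; sweeping alpha over [0, 1/λ) reproduces the kind of ranking table shown in Table 1.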

The proposed technique of modeling S and F as agents can easily be extended to finite discrete outcomes with different utilities. This section dealt with the ranking of agents based on historical interactions and outcomes. In the next section, we develop an iterative algorithm that aims to discover how the agents contribute to the outcomes of their workflows.



α      Ranking
0.01   A2, A3, A1, A4, A6, A7, A5
0.02   A2, A3, A1, A6, A7, A4, A5
0.05   A2, A3, A1, A6, A7, A4, A5
0.08   A2, A3, A1, A6, A5, A7, A4
0.10   A2, A1, A3, A6, A5, A7, A4
0.12   A2, A1, A3, A5, A6, A7, A4
0.15   A2, A1, A5, A3, A6, A7, A4

(e_j = 1, ∀j; e_S = 10; e_F = −10; λ = 6.222)

Table 1: Ranking of agents using the α-centrality measure

4 Iterative updates approach

In this section, we present an iterative approach to infer useful information regarding the impact of agents on their workflows.

One of the ways of measuring the impact of an agent is to assign a weight in the range [0, 1] that is indicative of the agent's contribution towards the success or failure of workflows. Let w_a, ∀a ∈ V, be the assignment of weights to the agents. Given these weight assignments and a specific workflow W, let w_avg = (Σ_{a∈W} w_a) / |{a ∈ W}| be the average weight of the agents belonging to the workflow. One way to explain the outcome of the workflow is to compare the average weight to certain thresholds associated with the outcomes. For instance, let S_t and F_t be two thresholds in the range [0, 1] corresponding to successful and failed workflows respectively. The assignment of weights is said to explain the outcome of a successful workflow W if w_avg > S_t. Similarly, for a failed workflow, we require w_avg < F_t.

For simplicity, let us assume that the outcomes R_t are either 0 (failure) or 1 (success). Our approach extends directly to the continuous case. An aggregate based method of assigning weights would average the outcomes of the agent's workflows: for an agent a, w_a = (Σ_{t: a∈W_t} R_t) / |{t : a ∈ W_t}|. Let f be the fraction of workflows that are explained by the aggregation based approach. Our goal is to significantly improve the fraction of explained workflows in comparison to f.

Our idea is very simple. We start with any valid assignment of weights to agents. For each workflow that is not explained by the current assignment, we update the weights of the agents belonging to the workflow in small quantities in such a way that the gap between the threshold and the workflow's average decreases. Specifically, we iteratively improve the assignment by considering all unexplained workflows in each iteration. For each workflow that is not explained, we update the weights of all the agents involved in its execution by a very small quantity ε. If a workflow's outcome is 1 and it is not explained, we increment the weight of each of its agents by ε. Similarly, if the workflow's outcome is 0 and it is unexplained, then we reduce the weight of each of its agents by the same quantity ε. While updating the weights in this fashion, we restrict them to the [0, 1] range. At all times, we maintain the best assignment so far. The procedure is terminated when the fraction of explained workflows is above a threshold F (say 0.95) or if the fraction of workflows explained by the assignments in the last L rounds does not increase by a minimum threshold M. As we can always begin with aggregate weights:
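The iterative procedure just described can be sketched as follows. Parameter names are ours, and the progress-over-L-rounds termination test is simplified to a round cap:

```python
def iterative_weights(workflows, outcomes, agents, eps=0.005,
                      s_t=0.5, f_t=0.5, target=0.95, max_rounds=10000):
    """Iterative update heuristic: start from aggregate weights and nudge
    the agents of each unexplained workflow by +/- eps.  Sketch only."""
    # aggregate initialization: average outcome of each agent's workflows
    w = {a: (sum(outcomes[t] for t, wf in enumerate(workflows) if a in wf) /
             max(1, sum(1 for wf in workflows if a in wf)))
         for a in agents}

    def explained(t):
        avg = sum(w[a] for a in workflows[t]) / len(workflows[t])
        return avg > s_t if outcomes[t] == 1 else avg < f_t

    def frac():
        return sum(explained(t) for t in range(len(workflows))) / len(workflows)

    best = dict(w), frac()
    for _ in range(max_rounds):
        if best[1] >= target:
            break
        for t, wf in enumerate(workflows):
            if not explained(t):
                step = eps if outcomes[t] == 1 else -eps
                for a in wf:
                    w[a] = min(1.0, max(0.0, w[a] + step))   # clamp to [0, 1]
        if frac() > best[1]:
            best = dict(w), frac()
    return best   # (weights, fraction of workflows explained)
```

Because the aggregate assignment is the starting point and only strictly better assignments replace it, the sketch trivially satisfies Proposition 4.1.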

PROPOSITION 4.1. The assignment returned by the iterative approach is at least as good as the aggregate based method.

Our algorithm is also very fast. It is easy to verify the following:

PROPOSITION 4.2. The running time of the iterative approach is O((L/M) · |I|), where |I| represents the size of the input.

Our approach is easily extended to infer the impact of the interactions as follows. Each direct interaction or link is modeled as a link agent with a weight associated with it. The algorithm just presented can now be run on the set of individual agents and link agents with the outcomes of the workflows as the input. In fact, when the workflow is a precedence relationship, it does not even change the running time. This approach can be extended to larger subsets in a limited way. For subsets of constant size, say k, we can consider hyper-edge agents of size k. In this case, of course, the running time would also depend on N^k.

Figure 4: An example that shows the advantage of the iterative approach (left: workflow compositions as edges among agents A, B, S, F, labeled with task counts; right: the same workflows with their outcomes, marked Success or Failure).

To get an understanding of how our approach helps us get better explanations, consider the example shown in Figure 4. In this example, there are four agents, namely, A, B, F, S. The LHS of the figure shows how a set of 1500 workflows are assigned as edges between two agents. The numbers on the edges indicate the number of tasks assigned to the edge. For example, 100 tasks are executed by an interaction between A and B. The RHS of the figure shows the outcomes of the workflow executions. The successful workflows are highlighted by solid lines and the unsuccessful ones by dotted lines. Agent S is the most effective agent, who is able to ensure the success of all the workflows he is assigned. A naive aggregation approach



would assign effectiveness values of 1.0, 0.6, 0.6, and 0.6 to S, F, A, B respectively. If we assume that an average above 0.5 is required to explain success and an average below 0.5 is required to explain failure, then these values are able to explain only 1100 workflows. The failed workflows between F, A and F, B are not explainable. However, a single iteration of our refinement algorithm with ε = 0.0004 computes the effectiveness of the different agents as: e_S = 1, e_F = 0.34, e_A = 0.52, e_B = 0.52. It is easy to check that these values explain all the workflows, including the failed ones. Thus, our heuristic is designed to capture this process of readjustment by focusing on workflows that need explanation. We present the details of our experimentation with this approach in the next section.
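The "easy to check" claim can be verified mechanically for the workflow types named in this example; the per-edge workflow counts are not needed for the threshold checks:

```python
# Weights from the text: e_S = 1, e_F = 0.34, e_A = e_B = 0.52,
# with thresholds S_t = F_t = 0.5.
w = {"S": 1.0, "F": 0.34, "A": 0.52, "B": 0.52}

def explains(pair, success, threshold=0.5):
    """An outcome is explained if the pair's average weight falls on the
    correct side of the threshold."""
    avg = (w[pair[0]] + w[pair[1]]) / 2
    return avg > threshold if success else avg < threshold

assert explains(("S", "A"), success=True)    # successful S-A workflows
assert explains(("S", "B"), success=True)    # successful S-B workflows
assert explains(("A", "B"), success=True)    # successful A-B workflows
assert explains(("F", "A"), success=False)   # failed F-A workflows: avg 0.43
assert explains(("F", "B"), success=False)   # failed F-B workflows: avg 0.43
```

The key movement is in e_F: pushing F's weight down to 0.34 drags the F-A and F-B averages below 0.5 without breaking the explanations of the successful workflows.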

Discussion. Note that the weights of A and B in the above example are not a good reflection of their abilities. However, when these weights are used in future assignments, their true abilities will be reflected. Readers familiar with multiplicative weights update methods [12, 1] will immediately notice that our updates are additive. The reason is that our notion of explanation is based on averages. For instance, in the above example, penalizing unexplained failed workflows by a multiplicative update of (1 − δ) would drive the weights of A, B, F to very small values, thus worsening the solution. Perhaps one could change the notion of explanation to suit multiplicative updates as well. Here, we restrict ourselves to average based explanations and hence additive updates.

5 Computational Results

As mentioned in the introduction, it is difficult to get real-life data on the effectiveness of workflow executions in service delivery. At the same time, modeling agents and interactions by standard mathematical models is not useful either, as they are not representative of real-life data. Therefore, it is essential to generate data that is as close to real-life data as possible. In the context of service delivery, where a major fraction of the human element is involved, the task of generating realistic data is a challenging research topic in itself. Techniques for such a simulation framework are not the main focus of this paper. However, developing a simulation framework and techniques to generate realistic data is one of our long-term goals. Here, we present the current implementation of our simulation platform and how we use it for our experimental purposes. Our framework is flexible and easily extensible so that experts with domain knowledge can quickly transform their experience into usable models that can be used by the simulator.

5.1 Simulation Platform The goal of the simulation platform is to build a simulation environment consisting of (i) a set of agents, each of whom is capable of performing a specified set of activities, (ii) a generator of realistic workflows requiring different functionalities in the form of a chain (to begin with), (iii) a mechanism to push the workflows through the network of agents in which the agent behavior is close to real-life, and (iv) a way to measure the outcomes of these workflow executions. In the order of their simplicity in implementation, this involves three aspects: (i) a mechanism to create the network of agents and the set of workflows, (ii) modeling the behaviors of the agents in the face of a sequence of workflow assignments, and (iii) the ways in which the workflows are assigned to the agents during the simulation. We present the details of our platform in this sequence, simple to complex.

The simplest part of the simulation framework is to instantiate it by creating the agents and setting the capacities of the agents. In our formulation, we mentioned that the workflows can be general subgraphs on the set of agents. However, currently in our framework, we represent workflows as precedence driven chains. Each stage of the chain is specified by a functionality required at that stage. Each agent is endowed with a capacity and a set of functionalities that it can perform. So, the assignment of workflows to agents has to satisfy the following constraints: (i) each stage is assigned to an agent who can perform the required functionality, and (ii) the capacity constraints of the agents and the links assigned to the workflow are not violated. There are also capacities on the links between two agents that capture the operational constraints of interaction between the two agents. Our simulation framework allows simple text based initialization of these fields by domain experts. It can also be initialized based on certain well observed phenomena like power laws. When a workflow execution is completed, the corresponding agents recover the capacities reserved for it. All along, we associate a weight q_a with each agent a ∈ V which is independent of the weights assigned by the algorithm in Section 4. These weights are used in the simulation.
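The two assignment constraints can be sketched as a simple feasibility check; the class and field names here are illustrative, as the paper does not give code:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Simulator agent: a capacity budget and the functionalities it offers."""
    capacity: int
    functionalities: set = field(default_factory=set)

def valid_assignment(chain, assignment, agents):
    """Check the two constraints for a precedence chain:
    (i) every stage goes to an agent offering the required functionality,
    (ii) no agent's capacity is exceeded by the stages assigned to it."""
    load = {}
    for stage_func, agent_id in zip(chain, assignment):
        if stage_func not in agents[agent_id].functionalities:
            return False                       # constraint (i) violated
        load[agent_id] = load.get(agent_id, 0) + 1
    return all(load[a] <= agents[a].capacity for a in load)   # constraint (ii)
```

A fuller version would also track the per-link capacities mentioned in the text; the check would extend naturally by counting consecutive stage pairs per agent pair.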

Modeling the behavior of the agents is a key aspect of the simulation. Clearly, it is unreasonable to capture the human element by simplistic mathematical models. Various parameters, such as external factors, personal traits of agents, etc., can affect their behavior and are difficult to capture mathematically. Two different effects of agent behavior on service delivery should be captured as well. Firstly, even competent agents can feel the monotony of similar jobs and might slip into complacency, especially when performing similar tasks on a repeated basis. Secondly, the agents also have the capability to learn on the job, thus making them decidedly different from machines.

We model the complex patterns of agent behavior by starting with simple models and modifying such predictable behavior with unpredictable and dynamic aspects. The simple form of an agent's behavior is a normal distribution N(p, v), where 0 ≤ p ≤ 1 is the mean of the weights it contributes to its workflows and v is the variance of the contribution. Associated with each agent is a parameter d, 0 ≤ d ≤ 1, that indicates the probability that the agent's performance deviates from expected behavior and takes a random form. This captures the unpredictability aspect of an agent.
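One possible reading of this behavior model in code, with p, v, and d as in the text; the uniform draw for the "random form" of a deviation is our assumption:

```python
import random

def agent_contribution(p, v, d, rng=random):
    """Sample one agent's contribution to a workflow: a draw from N(p, v)
    clipped to [0, 1], except that with probability d the agent deviates
    and behaves uniformly at random."""
    if rng.random() < d:
        return rng.random()                  # unpredictable deviation
    draw = rng.gauss(p, v ** 0.5)            # gauss takes the std dev, not variance
    return min(1.0, max(0.0, draw))
```

In a full simulator this sampler would be further modulated by the complacency and learning mechanisms described below.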

We extend this simple framework to incorporate different aspects of agent behavior as follows. Firstly, we treat the succession of similar jobs done by an agent as a random walk with a success probability corresponding to its standard profile. Now, the complacency aspect is introduced by asking the question, "what is the likelihood of a long walk of only successes in such a random walk?" Based on this probability, we introduce complacency at a given stage for a given agent. The learning aspect is handled as follows. Consider a workflow that is assigned to a subset of agents A, and let the average weight of the agents, as maintained by the simulation engine, be above S_t. We treat the workflow as a success. Even though the workflow was a success, some of the agents may have a performance model which indicates an expected response below S_t. We take this as an instance of the agent learning on the job. Once an agent gets ticked a certain number of times, we update its performance model to capture this learning. Lastly, to simulate scenarios such as the one illustrated in Section 4, we also add special agents characterized as super-agents and failure-prone agents. The failure-prone agents are superseded by the presence of super-agents.

The trickiest part of the simulation is the assignment of the workflows to the agents. The assignments affect the agent behavior in a non-trivial manner as described above. In our current implementation, we have multiple heuristics for assigning workflows to the agents and we generate data corresponding to each of the options. One way of assigning workflows is to "load balance" the agents in terms of their capacities. The second is the greedy way: choose the set of agents which has the required capacity and the best expected weight of success. Note that this aspect affects the complacency of the agents and we do not have any means of controlling it as of now. The third option is to make a random assignment at each stage of the workflow: among all the agents with surplus capacity who can perform the required functionality, choose one randomly.

5.2 Experimental Evaluation We used the simulation framework to carry out our experiments. We generated instances of service interaction networks by first creating a network of agents and then pushing a certain number of workflows through them. The outcome of the workflows was decided by the average of the weights of the assigned agents (the q_a's maintained by the simulation framework). The iterative analysis presented in Section 4 was run on the resulting service interaction networks. The efficacy of our approach was measured by how much it could improve the fraction of the workflows explained by the final weight assignment, as opposed to the fraction explained by the simple aggregation method.

Let T be the number of workflows and N be the number of actors in the service interaction network. We found that the improvement achieved by the iterative technique is not uniform for all combinations of T and N. Our experiments suggest that in a particular range of the ratio of the number of tasks to the number of actors, T/N, the iterative algorithm seems to do particularly well. We also studied the effect of the presence of super-agents and failure-prone agents on the performance of the iterative technique. We generated instances with varying numbers of workflows and agents, with and without the presence of special agents. For assigning the workflows to the agents, we followed all three policies mentioned in Section 5.1. In our set-up, we used S_t = F_t = 0.5, ε = 0.005, and different values of the threshold F, the fraction of workflows that need to be explained for the algorithm to terminate. A summary of our findings is:

• The performance of the iterative approach is not affected significantly by the presence of special agents.

• The performance of the iterative approach is not affected significantly by the choice of the heuristic for assigning workflows to the agents. The improvement is slightly less in the case of greedy assignment.

• In general, we are able to improve the fraction of explained workflows by around 0.20. As expected, we discover the special agents when they are present.

As mentioned above, the improvement achieved by the iterative method is not uniform over different T/N ratios. One of the patterns we observed is that when the ratio is close to 1, the improvement is not significant, as the average based approach performs quite well. As the ratio increases, the improvement grows up to a certain threshold. When the ratio is very high, say O(N), i.e., the number of workflows is O(N^2), the improvement is not significant. This is due to the fact that the complexities introduced by the model make it very difficult to explain the outcomes by associating a single weight with each agent. The table in Figure 5 shows the improvement while keeping the number of actors fixed. Figure 6 shows an approximate plot of how the T/N ratio affects the performance. Note that the horizontal axis is not to scale. The purpose of the figure is to capture the trend at the two extreme sides of the input sizes.

Our experiments with different strategies for assigning the workflows to agents did not affect the performance of the approach in any significant way. It only lowered the improvement to around 0.15. But the patterns of improvement over different T/N ratios remained the same. Therefore, we present our experimental numbers across all our experiments without making a distinction about the workflow assignment method. The table in Figure 7 shows the performance of the two approaches for different combinations of



#Agents   #Workflows   % Expl. (Agg. Appr.)   % Expl. (Iter. Appr.)
250       500          74                     90
250       2000         64                     81
250       10000        57                     77
250       60000        62                     78
250       80000        46                     61
125       500          67                     82
125       1500         62                     81
125       5000         63                     79
125       10000        66                     78
125       30000        44                     52

Figure 5: Comparison of the two approaches as T is increased while keeping N constant

Figure 6: An approximate plot of how the algorithm performs as the T/N ratio is varied (x-axis: T/N ratio from 1 to O(N), not to scale; y-axis: fraction of explained workflows; two curves: Aggregate Approach and Iterative Approach).

T and N. For each combination, we generated many instances and the table captures the average behavior. The mean and the variance of the improvement obtained by the iterative method over all the experiments were 0.19 and 0.05, respectively.

6 Discussion and Future Work

Analyzing service interaction networks is one of the important aspects of the emerging service engineering discipline. In this section, we highlight some of the interesting research problems in this area. We begin with some interesting extensions to the work presented here.

One of the reasons for the dip in the performance of the algorithm as the number of workflows increases is our attempt to capture the entire dynamics by a single number. One approach to deal with this problem is to divide the workflows temporally into multiple epochs and to maintain a separate weight for each epoch. This would improve the performance of both the aggregate based method and the iterative method. Identifying the epochs automatically seems to be an interesting way to extend our work.

In our current implementation, the workflows are precedence chains. This makes it easier for the iterative approach

#Agents   #Workflows   % Expl. (Agg. Appr.)   % Expl. (Iter. Appr.)
6         10           77                     90
10        10           73                     91
100       100          69                     84
125       1000         62                     81
125       5000         63                     79
250       500          71                     92
250       5000         56                     79
250       100000       38                     50

Figure 7: Comparison of the two approaches for different choices of T and N.

to include link agents and get similar performance benefits. It is not difficult to extend the simulation framework to generate workflows that are subgraphs on the set of agents. But capturing the impact of interactions between subsets of agents then becomes tricky. The iterative approach does not extend naturally. We are exploring algorithmic approaches in this generic scenario.

Eigenvector based approaches have had success in SNA and in related applications like page ranking. A recent research paper by de Kerchove and Van Dooren [7] has extended this approach to the case where negative links are present in the network. We are exploring the eigenvector based approach further in our current work.

Let us now highlight other interesting problems in analyzing service interaction networks. Consider the service interaction network that emerges in the example of Li & Fung. As the agents in this case are ISVs, they do not possess knowledge of the global network. However, as different workflows are passed through them and they interact with other ISVs, they start discovering parts of the network. When a large number of workflows have been pushed through the system, an interesting question emerges: is there a small set of agents who, if they combine their local knowledge of the network, can reconstruct a significant portion of the global network? This seems to be an important aspect of risk analysis in moderately sized networks. Even in intra-organizational networks, such information can be used to promote the identified agents to achieve better overall coordination. Modeling the problem formally and developing algorithmic approaches to it is a promising research direction.

As highlighted in the introduction, access to real-life data is a major hurdle for studying service interaction networks, especially from the point of view of public dissemination. Therefore, realistic data generation using novel simulation techniques and exploiting the domain knowledge of experts is essential for studying service interaction networks. The simulation framework presented here is only a first step in this direction. We are currently focusing on exploiting a combination of innovative modeling and integration of domain expertise to make our simulation framework richer and adaptable to a wider range of services scenarios.

References

[1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: A meta algorithm and its applications. Technical report, Princeton University.

[2] Philip Bonacich. Factoring and weighting approaches to clique identification. Journal of Mathematical Sociology, 2:113–120, 1972.

[3] Philip Bonacich and Paulette Lloyd. Eigenvector-like measures of centrality for asymmetric relations. Social Networks, 23:191–201, 2001.

[4] Philip Bonacich and Paulette Lloyd. Calculating status with negative relations. Social Networks, 26:331–338, 2004.

[5] Andrei Z. Broder, Ravi Kumar, Farzin Maghoul, Prabhakar Raghavan, Sridhar Rajagopalan, Raymie Stata, Andrew Tomkins, and Janet L. Wiener. Graph structure in the web. In Proceedings of the Ninth International Conference on World Wide Web (WWW), pages 309–320, 2000.

[6] John Seely Brown, Scott Durchslag, and John Hagel. Loosening up: How process networks unlock the power of specialization. In The McKinsey Quarterly: Special Edition on Risk and Resilience, 2002.

[7] Cristobald de Kerchove and Paul Van Dooren. The PageTrust algorithm: How to rank web pages when negative links are allowed? In SDM, pages 346–352, 2008.

[8] Brenda Dietrich and Terry Harrison. Serving the services industry. Operations Research and Management Science (OR/MS), 33(3), 2006.

[9] Linton C. Freeman. Centrality in social networks: Conceptual clarification. Social Networks, 1:215–239, 1979.

[10] John Hagel, Scott Durchslag, and John Seely Brown. Orchestrating loosely coupled business processes: The secret to successful collaboration. Copyright by John Hagel III, John Seely Brown, and Scott Durchslag; available at http://www.johnhagel.com/orchestratingcollaboration.pdf, 2002.

[11] M. G. Kendall. Further contributions to the theory of paired comparisons. Biometrics, 11:43, 1955.

[12] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.

[13] Stanley Milgram. The small world problem. Psychology Today, 2:60–67, 1967.

[14] Amit A. Nanavati, Siva Gurumurthy, Gautam Das, Dipanjan Chakraborty, Koustuv Dasgupta, Sougata Mukherjea, and Anupam Joshi. On the structural properties of massive telecom call graphs: findings and implications. In Proceedings of the 2006 ACM International Conference on Information and Knowledge Management (CIKM), pages 435–444, 2006.

[15] Mark Newman. Scientific collaboration networks: I. Network construction and fundamental results. Physical Review E, 64, 2001.

[16] Mark Newman. Scientific collaboration networks: II. Shortest paths, weighted networks, and centrality. Physical Review E, 64, 2001.

[17] Mark Newman. The structure and function of complex networks. SIAM Review, 45:167–256, 2003.

[18] Andrea Ordanini and Kenneth L. Kraemer. Medion: The retail orchestrator in the computer industry. 2006.

[19] Vinayaka Pandit, Natwar Modani, Sougata Mukherjea, Amit A. Nanavati, Sambuddha Roy, and Amit Agarwal. Extracting dense communities from telecom call graphs. In Proceedings of the Third International Conference on COMmunication System softWAre and MiddlewaRE (COMSWARE 2008), pages 82–89, 2008.

[20] Bikram Sengupta, Satish Chandra, and Vibha Sinha. A research agenda for distributed software development. In 28th International Conference on Software Engineering (ICSE), pages 731–740, 2006.

[21] Vibha Sinha, Bikram Sengupta, and Satish Chandra. Enabling collaboration in distributed requirements management. IEEE Software, 23(5):52–61, 2006.

[22] Giuseppe Valetto, Mary Helander, Kate Ehrlich, Sunita Chulani, Mark N. Wegman, and Clay Williams. Using software repositories to investigate socio-technical congruence in development projects. In Fourth International Workshop on Mining Software Repositories (MSR), page 25, 2007.

[23] Stanley Wasserman and Katherine Faust. Social Network Analysis: Methods and Applications. Cambridge University Press, New York, 1994.

[24] T. H. Wei. The Algebraic Foundations of Ranking Theory. Cambridge University Press, London, 1952.
