Page 1

Evaluation and Scalability of Goal Models

URN Meeting

Ottawa, January 16-18, 2008

Jennifer Horkoff

PhD Candidate, Department of Computer Science, U of T

Supervisor: Eric Yu

Page 2

Introduction
• 2nd Year Ph.D. student in the Department of Computer Science.
• Research Interests: Requirements Modeling, Intentional (Goal) Modeling, Intentional Model Analysis/Evaluation, Model Scalability.
• PhD Topic (in progress): Analysis of i* models.
• This presentation will briefly outline current and past research topics potentially relevant to the definition of GRL and URN.

Page 3

Outline

• Background
• i* Analysis
  – jUCMNav Evaluation
  – Interactive Forward Analysis of i* Models
  – Interactive Backward Analysis of i* Models
  – Using Satisfaction Arguments in i* Analysis
• Scalability
  – i* Scalability
  – Reusable i* Technology “Patterns”

• Conclusions

Page 4

Background
• URN: User Requirements Notation
  – Connecting business processes with requirements concepts
  – Combines two notations:
    • Use Case Maps (UCM)
    • Goal-Oriented Requirement Language (GRL)
  – GRL is based on the i* and NFR Frameworks
  – Agent-oriented, intentional modeling framework (“why?” as well as “what?” and “how?”)
  – Captures stakeholders, their goals, and how these goals are achieved, including dependencies amongst stakeholders.

Page 5

i* Syntax and Example

[Figure: i* syntax legend and example. The legend shows element types (Goal, Softgoal, Task, Resource), actor types (Actor, Agent, Role, Position) with actor boundaries, link types (Means-Ends, Decomposition, Contribution, Dependency), and contribution labels (Make, Help, Some +, Unknown, Some -, Hurt, Break). The example model depicts the Trusted Computing domain: a PC Product Provider (Sell PC Products for Profit, Produce PC Products, Allow Peer-to-Peer Technology, Profit, PC Users Abide by Licensing Regulations), a PC User (PC Products Be Obtained, Purchase PC Products, Obtain PC Products from Data Pirate, Affordable PC Products, Abide By Licensing Regulations, Desirable PC Products), and a Data Pirate (Make Content Available Peer-to-Peer, Pirated PC Products), connected by contribution and dependency links.]
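To make the element and link vocabulary concrete, here is a minimal sketch (in Python; the class names, and the specific link in the example fragment, are illustrative assumptions rather than a published i*/GRL metamodel) of how such a model might be represented for analysis:

```python
from dataclasses import dataclass, field

# Vocabularies taken from the i* legend above.
ELEMENT_TYPES = {"goal", "softgoal", "task", "resource"}
LINK_TYPES = {"means-ends", "decomposition", "dependency", "contribution"}
CONTRIBUTIONS = {"Make", "Help", "Some +", "Unknown", "Some -", "Hurt", "Break"}

@dataclass
class Element:
    name: str
    kind: str                 # one of ELEMENT_TYPES
    actor: str | None = None  # enclosing actor boundary, if any

@dataclass
class Link:
    source: str
    target: str
    kind: str                        # one of LINK_TYPES
    contribution: str | None = None  # set only for contribution links

@dataclass
class Model:
    elements: dict[str, Element] = field(default_factory=dict)
    links: list[Link] = field(default_factory=list)

# An illustrative fragment of the Trusted Computing model (link label assumed):
m = Model()
for name, kind, actor in [
    ("Allow Peer-to-Peer Technology", "task", "PC Product Provider"),
    ("Desirable PC Products", "softgoal", "PC User"),
]:
    m.elements[name] = Element(name, kind, actor)
m.links.append(Link("Allow Peer-to-Peer Technology", "Desirable PC Products",
                    "contribution", "Help"))
```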

Page 6

Analysis/Evaluation of i* Models
• The creation of i* models is in and of itself a useful activity:
  – explicitly captures the reasoning process
  – aids in communication
  – helps discover requirements.
• Can make further use of models by evaluating them.
• Purpose is to determine to what degree stakeholder goals will be satisfied or denied, given a particular situation or scenario.
• Claim: Evaluation is an important part of i* modeling; it allows users to determine whether goals are met and guides revision of the model and design, improving quality.
• Other modeling techniques (process models, data models) do not have this capability.

Page 7

jUCMNav Evaluation Overview
• Quantitative links between business processes in UCM and business goals in GRL.
• Initial satisfaction levels in GRL evaluation define strategies.
• Key Performance Indicators (KPIs) are used as part of Business Activity Monitoring to measure how well a process satisfies goals.
  – Four main dimensions of indicators: time, cost, quality and flexibility

• A KPI model connects business process monitors to GRL goals using quantified contribution links.

Page 8

jUCMNav Evaluation Overview
• KPI models are kept separate from GRL models; individual KPI models are defined for different stakeholders.
• KPI values are mapped to evaluation levels (see the sketch below):
  – Target Value, Threshold Value, Worst Value

• [Figure from: A. Pourshahid, D. Amyot, P. Chen, M. Weiss, A. J. Forster, “Business Process Monitoring and Alignment: An Approach Based on the User Requirements Notation and Business Intelligence Tools”, IX Workshop on Requirements Engineering (WER’06), 2006.]
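As a rough illustration of the KPI-to-evaluation-level mapping described above (a sketch only, not jUCMNav's implementation; the linear interpolation and the [-100, 100] scale are assumptions), a raw KPI value can be normalized against its target, threshold, and worst values:

```python
def kpi_to_level(value, target, threshold, worst):
    """Normalize a raw KPI value to an evaluation level in [-100, 100].

    Assumed convention: value == target -> 100 (fully satisfied),
    value == threshold -> 0, value == worst -> -100 (fully denied);
    values in between are interpolated linearly.
    """
    if target == threshold or threshold == worst:
        raise ValueError("target, threshold and worst must be distinct")
    if (value - threshold) * (target - threshold) >= 0:
        # On the "good" side of the threshold: interpolate toward the target.
        level = 100 * (value - threshold) / (target - threshold)
    else:
        # On the "bad" side of the threshold: interpolate toward the worst value.
        level = -100 * (value - threshold) / (worst - threshold)
    return max(-100.0, min(100.0, level))

# Example: a response-time KPI where lower is better
# (target 2s, threshold 8s, worst 20s).
print(kpi_to_level(6, target=2, threshold=8, worst=20))   # ~33.3, mildly positive
print(kpi_to_level(14, target=2, threshold=8, worst=20))  # -50.0, clearly negative
```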

Page 9

Analysis/Evaluation of i* Models
• Our approach: Qualitative, Forward/Backward, Interactive i* Evaluation
• Can evaluate models in two directions:
  – Forwards (bottom-up)
    • Asking “What if…?” questions
  – Backwards (top-down)
    • Asking “Is this possible?” and “What is needed to achieve…?” questions

• Our approach to analysis is interactive and qualitative.

Page 10

Interactive Analysis of i* Models
• Our approach to analysis is interactive.
  – The procedure prompts the user for input at various points.
  – Because i*/GRL models attempt to capture the desires of, and interactions between, stakeholders, they are inherently incomplete.
  – Decisions must be supplemented by expert knowledge.
  – We aim to make models which are “complete enough” to facilitate understanding and useful analysis.

Page 11

Qualitative Analysis of i* Models
• Our approach to analysis is qualitative.
  – At the early requirements stage (“Early-RE”) where i* models are used, concrete quantitative information is often not available, yet we want to be able to model, understand and analyze the domain.
  – We use coarse-grained qualitative analysis values, based on evaluation from the NFR Framework.
  – Our approach does not exclude expansion to quantitative measures, if available.

Page 12

Qualitative Analysis of i* Models

• Approximate mapping from jUCMNav evaluation values to qualitative values (sketched below)
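The mapping figure itself did not survive extraction, so the sketch below shows one plausible shape for it: quantitative jUCMNav-style evaluation values mapped onto the qualitative labels used in this work. The breakpoints and the treatment of 0 are assumptions, not the published mapping (and Conflict, which carries mixed evidence, has no single numeric counterpart):

```python
def quantitative_to_qualitative(v):
    """Map an evaluation value in [-100, 100] to a qualitative label.
    Breakpoints are illustrative assumptions."""
    if v == 100:
        return "Satisficed"
    if v > 0:
        return "Partially Satisficed"
    if v == 0:
        return "Unknown"   # could equally represent Conflict or no evidence
    if v > -100:
        return "Partially Denied"
    return "Denied"

for v in (100, 40, 0, -40, -100):
    print(v, "->", quantitative_to_qualitative(v))
```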

Page 13

Forwards, Qualitative, Interactive Evaluation of i* Models

• An evaluation procedure for i* models (M.Sc. work) was developed, expanded and adapted from the evaluation procedure in the NFR Framework.

• Qualitative evaluation labels are propagated throughout the graph using a combination of propagation rules and human judgment.

• Human judgment is needed to resolve situations where conflicting evidence arrives at a softgoal.

• All other situations can be resolved through automatic rules.
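As a rough sketch of this forward step (simplified rules over a reduced label set; the full procedure's rule tables are richer), labels can be propagated across contribution links automatically, with mixed evidence at softgoals deferred to the analyst:

```python
# Simplified propagation across a contribution link: what a source label
# becomes at the target. A sketch of NFR-style rules, not the full table.
def propagate(label, link):
    if link == "Make":
        return label
    if link == "Help":   # weaken full evidence to partial
        return {"Satisficed": "Partially Satisficed",
                "Denied": "Partially Denied"}.get(label, label)
    if link == "Hurt":   # invert and weaken
        return {"Satisficed": "Partially Denied",
                "Partially Satisficed": "Partially Denied",
                "Denied": "Partially Satisficed",
                "Partially Denied": "Partially Satisficed"}.get(label, label)
    if link == "Break":  # invert
        return {"Satisficed": "Denied",
                "Partially Satisficed": "Partially Denied",
                "Denied": "Satisficed",
                "Partially Denied": "Partially Satisficed"}.get(label, label)
    return "Unknown"

def resolve_softgoal(element, incoming):
    """Combine the (label, source) pairs arriving at a softgoal: agreement
    resolves automatically; conflicting evidence goes to human judgment."""
    if len(set(label for label, _ in incoming)) == 1:
        return incoming[0][0]
    return ask_human(element, incoming)  # hypothetical prompt; see Page 15

print(propagate("Satisficed", "Hurt"))  # -> Partially Denied
```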

Page 14

[Figure: the Trusted Computing example model, repeated for evaluation.]

Forward Evaluation of i* Models
• Demonstrate the procedure through an example: a simple model depicting the Trusted Computing domain
• Step 1: Formulate question
  – If the PC Product Provider decides not to Allow Peer-to-Peer Technology, what effect will this have on Sell PC Products for Profit?
• Step 2: Place initial labels reflecting the question

Page 15

[Figure: a series of snapshots of the Trusted Computing model, showing evaluation labels being propagated through the model step by step.]

Interactive Evaluation of i* Models
• Step 3: Propagate labels
• Step 4: Resolve labels
• Iterate on steps 3 and 4 until all labels have been propagated

Human Intervention:
  Affordable PC Products receives the following labels:
    – Partially Denied from Obtain PC Products from Data Pirate
    – Partially Denied from Purchase PC Products
  Select label… the analyst selects Denied.

Human Intervention:
  Profit receives the following labels:
    – Partially Denied from Desirable PC Products
    – Partially Satisficed from PC Users Abide by Licensing Regulations
  Select label… the analyst selects Conflict.
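A minimal sketch of how such a prompt could be driven programmatically (the dialog wording is taken from this slide; the function itself, used by resolve_softgoal in the earlier sketch, is a hypothetical stand-in for the tool's dialog):

```python
def ask_human(element, incoming):
    """Present the labels arriving at one element, as in the
    'Human Intervention' dialogs above, and read back a resolution."""
    print(f"{element} receives the following labels:")
    for label, source in incoming:
        print(f"  - {label} from {source}")
    return input("Select label: ")

# Mirrors the first dialog on this slide; the analyst answered "Denied".
resolved = ask_human(
    "Affordable PC Products",
    [("Partially Denied", "Obtain PC Products from Data Pirate"),
     ("Partially Denied", "Purchase PC Products")])
```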

Page 16

Interactive Evaluation of i* Models
• Step 5: Analyze result
  – Preventing the use of peer-to-peer technology will reduce piracy, but will also make products less desirable to users
  – The overall effect on Profit, and thus on Sell PC Products for Profit, is both positive and negative
• Step 6: Repeat with a new analysis question

[Figure: the final evaluated Trusted Computing model, showing the resulting labels.]

Page 17

Backwards, Qualitative, Interactive Evaluation of i* Models

• Work in progress; an initial version was completed for an Automated Verification course project.
• Based on work which implements backward analysis for goal models.
• Qualitative evaluation labels are propagated throughout the graph in a top-down manner, again using a combination of propagation rules and human judgment.
• An i* model and propagation rules are converted to axioms in conjunctive normal form (CNF).
• The axioms are used with a SAT solver in an iterative procedure.
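A minimal sketch of the encoding idea (element/label variables, clauses for Help links, a unit clause for the target), using the pycosat SAT solver for brevity; the actual axioms, label set, and tooling in this work differ:

```python
import itertools
import pycosat  # any CNF SAT solver would do

# One boolean variable per (element, label) pair, simplified to two labels.
elements = ["Restrict Structure of Password", "Ask for Secret Question",
            "Security", "Usability", "Attract Users"]
labels = ["PS", "PD"]  # Partially Satisficed / Partially Denied
var = {(e, l): i + 1
       for i, (e, l) in enumerate(itertools.product(elements, labels))}

clauses = []

def help_link(a, b):
    # Forward axiom for a Help link a -> b, e.g. PS(a) implies PS(b),
    # written in CNF as the clause (-PS(a) or PS(b)).
    clauses.append([-var[(a, "PS")], var[(b, "PS")]])
    clauses.append([-var[(a, "PD")], var[(b, "PD")]])

help_link("Security", "Attract Users")
help_link("Usability", "Attract Users")

# Target as a unit clause: Attract Users must be Partially Satisficed.
clauses.append([var[("Attract Users", "PS")]])

solution = pycosat.solve(clauses)
if solution == "UNSAT":
    print("Target not achievable under these axioms.")
else:
    print("One assignment:", [k for k, v in var.items() if v in solution])
```

In the real procedure, backward axioms and the constraints arising from human judgment are added as further clauses, and the solver is re-run until an assignment is found or shown impossible.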

Page 18

[Figure: example model. Within the Application actor, the task Implement Password System is decomposed into Restrict Structure of Password and Ask for Secret Question; these contribute (Help/Hurt) to the softgoals Security and Usability, which both Help the softgoal Attract Users.]

Backwards Evaluation of i* Models
• Demonstrate the procedure through an example: a simple model of password system design
• Step 1: Formulate question
  – Is there an assignment of target labels such that Attract Users is Partially Satisficed?
  – Target: Attract Users Partially Satisficed
  – Input Elements: Restrict Structure of Password, Ask for Secret Question

Page 19

[Figure: the password system model, repeated.]

Backwards Evaluation of i* Models
• Step 2: Iterative procedure begins
  – The procedure runs and determines that there may be an assignment in which Attract Users is Partially Satisficed (depending on human judgment)
  – The procedure prompts the user for human judgment on nodes with label conflicts, starting from the “top” down.

Human Judgment:
  Attract Users must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value?
  Input: Security and Usability Partially Satisficed

Human Judgment:
  Security must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value?
  Input: Restrict Structure of Password and Ask for Secret Question Satisfied

Human Judgment:
  Usability must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value?
  Input: Restrict Structure of Password Denied and Ask for Secret Question Satisfied

The procedure runs… more elements needing human judgment are found.
The procedure runs… Conflict! A satisfying assignment is not found. Go back to the last round of human judgment.
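A sketch of the surrounding control flow (assumed, building on the encoding sketch from the previous slide): each human judgment contributes extra clauses, and when the solver reports a conflict the procedure backs up one round and tries the analyst's next alternative:

```python
import pycosat

def backward_search(base_clauses, pending):
    """Backtracking over human judgments (a sketch of the behavior shown
    in the dialogs above, not the actual procedure). `pending` is a list of
    (element, alternatives): for each element needing judgment, the analyst
    offers alternative clause sets, tried in order."""
    def recurse(i, clauses):
        if i == len(pending):
            result = pycosat.solve(clauses)
            return None if result == "UNSAT" else result
        element, alternatives = pending[i]
        for alt in alternatives:          # each analyst answer = extra clauses
            result = recurse(i + 1, clauses + alt)
            if result is not None:
                return result             # satisfying assignment found
        return None  # conflict at this round: caller falls back a round
    return recurse(0, base_clauses)
```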

Page 20

[Figure: the password system model, repeated.]

Backwards Evaluation of i* Models
• Step 2: Iterative procedure continues
  – The procedure runs and determines that there may be an assignment in which Attract Users is Partially Satisficed (depending on human judgment)
  – The procedure prompts the user for human judgment on nodes with label conflicts, starting from the “top” down.

Human Judgment:
  Security must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value?
  Previous: Restrict Structure of Password and Ask for Secret Question Satisfied
  No new input.

Human Judgment:
  Usability must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value?
  Previous: Restrict Structure of Password Denied and Ask for Secret Question Satisfied
  No new input.

Human Judgment:
  Attract Users must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value?
  Previous: Security and Usability Partially Satisficed
  New input: Security Satisfied and Usability has a Conflict value

Human Judgment:
  Usability must have a Conflict value. What combinations of the elements contributing to this element would produce this value?
  Input: Restrict Structure of Password and Ask for Secret Question Satisfied

Page 21

Backwards Evaluation of i* Models
• Step 3: Satisfying assignment found/not found
  – When a satisfying assignment that no longer requires human judgment is found, the procedure ends, reporting the needed target values.
  – If a satisfying assignment is not found, the procedure ends, reporting that the target is not possible.

[Figure: the password system model with the final assignment.]

Page 22

Analysis/Evaluation of i* Models
• Claims:
  – Evaluation increases the modeler’s knowledge of the domain.
  – Evaluation leads the modeler to make changes to the model, improving the quality of the model.
• In the thesis, these claims are backed up for forward analysis by several examples.
• The forward procedure was implemented in a deprecated version of the OpenOME tool; it will be implemented again, along with the backward procedure, in the new version of OpenOME using EMF.

• Future work will include further applying both procedures to several case studies, evaluating their usefulness in real-life situations.

Page 23

Using Satisfaction Arguments in the Analysis/Evaluation of i* Models

• Based on work by N. Maiden’s group in the U.K.
  – N. Maiden, J. Lockerbie, D. Randall, S. Jones, D. Bush, “Using Satisfaction Arguments to Enhance i* Modelling of an Air Traffic Management System”, 15th IEEE International Requirements Engineering Conference, 2007.
• M. Jackson’s notion of a satisfaction argument:
  – D, S ⊢ R
  – Properties of the Domain, along with the Specification, can be used to show that one or more Requirements hold.
• This has been incorporated into i* modeling by adding structured, textual satisfaction arguments to justify satisfaction through means-ends and contribution links.
• Motivated by an inability to capture the justifications behind model structure elicited during stakeholder workshops.
• Future Work: Incorporating Satisfaction (or “Evaluation”) Arguments into i* Evaluation, capturing justifications for relationships and human judgment.

Page 24

Analysis/Evaluation of i* Models: Other i* and Related Procedures
• There are several other procedures for i* analysis introduced by other research groups, for example:
• Work by X. Franch in Spain uses the structure of i* models as a means to measure desired properties such as security and predictability.
  – X. Franch, “On the Quantitative Analysis of Agent-Oriented Models”, CAiSE’06, 2006, pp. 495-509.
• Work in Italy facilitates either qualitative or quantitative propagation through goal models.
  – P. Giorgini, J. Mylopoulos, E. Nicchiarelli, R. Sebastiani, “Reasoning with Goal Models”, Proceedings of the 21st International Conference on Conceptual Modeling (ER 2002), pp. 167-181, 2002.
• Work in the U.K. by N. Maiden’s group analyzes compliance of elements based on existing requirements and uses overall evaluation values for each actor.
  – N. Maiden, J. Lockerbie, D. Randall, S. Jones, D. Bush, “Using Satisfaction Arguments to Enhance i* Modelling of an Air Traffic Management System”, 15th IEEE International Requirements Engineering Conference, 2007.

Page 25

Analysis/Evaluation of i* Models: Incorporating into GRL Definition?
• How do we account for the existence of i*/GRL evaluation and analysis in the URN Standard?
• Should we?
• How do we leave our consideration of i*/GRL evaluation and analysis open enough to facilitate various approaches to analysis?

Page 26

Model Scalability
• i* models can grow to unmanageable sizes
• How do we deal with i* scalability?
• How does our concern for scalability affect the URN standard?

[Figure: example i* model from the Kids Help Phone project, University of Toronto, 2005.]

Page 27

Model Scalability
• What are some of the mechanisms we can employ to deal with scalability?
  – Model subsets/modular views?
  – Tabular views?
  – Queries?
  – Slicing?
  – Layers?
  – Patterns?
  – Etc.
• Effective tool support is needed
• How do we account for, or not exclude, the possibility of these techniques within the Standard?

Page 28

Reusable i* Technology “Patterns”
• Work with Markus Strohmaier, Jorge Aranda, Steve Easterbrook, Eric Yu
• Claim: Developing generalized models representing specific technology types (e.g., wikis, discussion forums, chat rooms) can aid the process of analyzing the appropriateness of technologies for specific contextual situations.

• What effects could pattern use have on scalability?

Page 29

Reusable i* Technology “Patterns”
• General Methodology:
  – Develop contextualized model
  – Integrate pre-existing technology pattern into model, adapting the pattern as necessary
  – Evaluate effectiveness of the technology in the contextual situation (using the i* evaluation method)
  – Repeat steps 2 and 3 for each promising technology
  – Come up with a solution

[Figure: an Internal Forums technology pattern (with elements such as Effective Use of Forums, Avoid Forum Duplication, Have a Way to Store Information Permanently, and Keep Updated on Things That Are Missed) integrated into the Kids Help Phone context model, with actors Counsellor, Supervisor, Clinical Supervisor, and VP Youth and Family Services linked through contribution and dependency links.]

Page 30

Reusable i* Technology “Patterns”

• We have used this method in the Kids Help Phone study to test the viability of various technologies used for knowledge management.

• We have collected the data for an experiment testing the utility of technology patterns using students from a course.

• Additional Claims:
  – The resulting model is more detailed and of a higher quality than if the pattern were not used.
  – The process of integrating the pattern is easier than developing the model from scratch.
• Patterns do not help with the scalability of the overall model, but may help by modularizing modeling steps (creation, understanding, etc.)
• Submitted paper, REFSQ’08:
  – “Can Patterns improve i* Modeling? Two Exploratory Studies”

Page 31

Conclusions

• i*/GRL Analysis/Evaluation is an important capability of the modeling language.

• How do we account for this in the standard while still being open to different approaches?

• Scalability is an important issue in relation to i*/GRL usage.

• Should this issue be accounted for in GRL standardization? How?

Page 33

References
• J. Horkoff, Using i* Models for Evaluation, M.Sc. Thesis, Department of Computer Science, University of Toronto, 2006.
• J. Horkoff, E. Yu, L. Liu, “Analyzing Trust in Technology Strategies”, International Conference on Privacy, Security and Trust (PST 2006), Markham, Ontario, Canada, 2006.
• L. Chung, B. A. Nixon, E. Yu, J. Mylopoulos, Non-Functional Requirements in Software Engineering, Kluwer Academic Publishers, 2000.
• “OpenOME, an open-source requirements engineering tool”, retrieved August 2007, from www.cs.toronto.edu/km/openome/
• M. Jackson, Software Requirements and Specifications, Addison-Wesley, 1995.
• N. Maiden, J. Lockerbie, D. Randall, S. Jones, D. Bush, “Using Satisfaction Arguments to Enhance i* Modelling of an Air Traffic Management System”, 15th IEEE International Requirements Engineering Conference, 2007.
• A. Pourshahid, D. Amyot, P. Chen, M. Weiss, A. J. Forster, “Business Process Monitoring and Alignment: An Approach Based on the User Requirements Notation and Business Intelligence Tools”, IX Workshop on Requirements Engineering (WER’06), 2006.
• X. Franch, “On the Quantitative Analysis of Agent-Oriented Models”, CAiSE’06, 2006, pp. 495-509.
• P. Giorgini, J. Mylopoulos, E. Nicchiarelli, R. Sebastiani, “Reasoning with Goal Models”, Proceedings of the 21st International Conference on Conceptual Modeling (ER 2002), pp. 167-181, 2002.
• J. Aranda, N. Ernst, J. Horkoff, S. Easterbrook, “A Framework for Empirical Evaluation of Model Comprehensibility”, Modeling in Software Engineering (MiSE) Workshop at ICSE 2007, Minneapolis, MN, May 2007.
• M. Strohmaier, E. Yu, J. Horkoff, J. Aranda, S. Easterbrook, “Analyzing Knowledge Transfer Effectiveness – An Agent-Oriented Approach”, 40th Hawaii International Conference on System Sciences (HICSS-40 2007), HI, USA, 2007.

Page 34

Other Topics

• Modeling and Analyzing Technology Strategies

• Strategy Analysis Using Goal Models

• i* Modeling for Knowledge Management Analysis (KTA Method)

• Framework for Assessing the Comprehensibility of Models

Page 35

Modeling and Analyzing Technology Strategies

• As technology design becomes increasingly motivated by business strategy, technology users become wary of vendor intentions.

• Conversely, technology producers must discover strategies to gain the business of consumers.

• Both parties have a need to understand how business strategies shape technology design, and how such designs affect stakeholder goals.

• Claim: A goal-based methodology can be introduced which aids in the analysis of technology strategies, for both technology producers and consumers.

• Detailed Case Study: Trusted Computing
• J. Horkoff, E. Yu, L. Liu, “Analyzing Trust in Technology Strategies”, International Conference on Privacy, Security and Trust (PST 2006), Markham, Ontario, Canada, 2006.

• Working on a journal submission.

Page 36

Modeling and Analyzing Technology Strategies

• Example: Trusted Computing Case Study

[Figure: example model from the Trusted Computing case study.]

Page 37

i* Modeling for Knowledge Management Analysis (KTA Method)

• Work with Markus Strohmaier (first author), Jorge Aranda, Eric Yu, Steve Easterbrook.

• Claim: The analysis powers offered by i* modeling would complement the analysis of knowledge transfer studied within the field of knowledge management.
• Developed the Knowledge Transfer Agent (KTA) Method, where the means of knowledge transfer are envisioned and modeled as intentional agents within an i* model.
• Can analyze the feasibility of different knowledge transfer mechanisms from a goal point of view before they are implemented:
  – Are stakeholder goals satisfied?
  – Is the knowledge transfer mechanism a success according to its ascribed goals?

[Figure: KTA example in which Role A depends on a Storage Object and a Communication Channel, modeled as intentional agents, to transfer Knowledge X to Role B.]

Page 38

i* Modeling for Knowledge Management Analysis (KTA Method)

• Have been working with the Kids Help Phone organization for 2+ years on a strategic requirements analysis project.

• Used this setting to develop and test the KTA method.

• M. Strohmaier, E. Yu, J. Horkoff, J. Aranda, S. Easterbrook, “Analyzing Knowledge Transfer Effectiveness – An Agent-Oriented Approach”, 40th Hawaii International Conference on System Sciences (HICSS-40 2007), HI, USA, 2007.

• Currently working on a journal paper.

Page 39

Framework for Assessing the Comprehensibility of Models

• Work with Jorge Aranda, Neil Ernst, Steve Easterbrook
• Original Intention: Design an experiment to “prove” that i* models aid in comprehensibility.
• Led to an overview of related experiments and the discovery of several issues which make such experiments difficult to design.
• Revised Purpose: Propose a framework for evaluating the comprehensibility of models in general.
• Framework includes suggestions and guidelines involving:
  – Measuring comprehensibility
  – Articulating modeling framework theories
  – Developing hypotheses
  – Using existing theories
  – Choosing domains, participants, etc.
• J. Aranda, N. Ernst, J. Horkoff, S. Easterbrook, “A Framework for Empirical Evaluation of Model Comprehensibility”, Modeling in Software Engineering (MiSE) Workshop at ICSE 2007, Minneapolis, MN, May 2007.

• Future work: Apply and revise framework! (First candidate: i* models)

Page 40

Other Interests

• Model “Presentability” to stakeholders.

– How can stakeholders understand models?

– How much training is needed?

– Are they willing?

• Verification/Validation for conceptual models.

– How can such (potentially large) models be verified?

– By stakeholders?