
Characterization of Model-based Software Testing Approaches Systematic Review

This document has been developed to be a joint SCR/COPPE Technical Report at PESC/COPPE/UFRJ - Brazil

Characterization of Model-based Software Testing Approaches

Arilo Claudio Dias Neto (a)

Rajesh Subramanyan (b)

Marlon Vieira (b)

Guilherme Horta Travassos (a)

(a) PESC/COPPE/UFRJ

(b) SIEMENS CORPORATE RESEARCH

Accomplished September-December 2006

Published August-2007


Summary

FIGURE REFERENCES

TABLE REFERENCES

1 INTRODUCTION

2 SYSTEMATIC REVIEW PROTOCOL REGARDING MODEL-BASED TESTING
  2.1 Goal of this Work
  2.2 Research Question
    2.2.1 P0: What Model-based Software Testing Approaches have been described in the technical literature and what are their main characteristics?
  2.3 Selection of Sources
    2.3.1 Keywords
    2.3.2 Language of Primary Studies
    2.3.3 Identification of Source
    2.3.4 Type of Primary Studies
    2.3.5 Primary Studies Inclusion and Exclusion Criteria
    2.3.6 Process to Select Primary Studies
  2.4 Primary Studies Quality Evaluation
  2.5 Information Extraction Strategy
  2.6 Summary of the Results
  2.7 Search
    2.7.1 Search Strings
  2.8 Execution
  2.9 Analysis
    2.9.1 Priority Levels (PL)
    2.9.2 Quantitative and Qualitative Analysis

3 RESULTS ANALYSIS
  3.1 Quantitative Analysis
    3.1.1 Testing Level
    3.1.2 MBT Approaches by Software Domain
    3.1.3 Non-functional Requirements
    3.1.4 Models Used to Describe the Software Structure or Behavior
    3.1.5 Coverage and Test Case Generation Criteria
    3.1.6 Level of Automation
    3.1.7 Analysis of Papers' References
  3.2 Qualitative Analysis
    3.2.1 Limitations of Models, Tools, and Input and Output Formats
    3.2.2 Complexity, Effort, and Cost to Apply MBT Approaches
    3.2.3 Changing the Paradigm of MBT and the Testing Criteria
    3.2.4 Positive and Negative Aspects of MBT Approaches

4 TOWARDS NEW RESEARCH REGARDING MBT
  4.1 Observations for the Application or Development of an MBT Approach
    4.1.1 The model to describe the software behavior is a key point
    4.1.2 Testing coverage must address control-flow and data-flow criteria
    4.1.3 The level of automation must be high
    4.1.4 Tool support for the automation of the steps is fundamental
    4.1.5 The skill and complexity to use any approach must be understood by all stakeholders
    4.1.6 The order of steps to be followed while using an MBT approach
    4.1.7 Technology transfer
  4.2 Future Research Topics in Model-based Testing
    4.2.1 In what contexts (testing levels, software categories) have Model-based Testing strategies not been applied? Why?
    4.2.2 How are non-functional requirements descriptions used for test case generation?
    4.2.3 What defect types are not detected by the test cases generated from a specific behavior model?

5 CONCLUSIONS
  5.1.1 Summary
  5.1.2 Contributions
  5.1.3 Limitations
  5.1.4 Future Work

6 BIBLIOGRAPHY REFERENCES

APPENDIX A – LIST OF IDENTIFIED PAPERS

APPENDIX B – MODEL-BASED TESTING APPROACHES CLASSIFICATION


Figure References

Figure 1. Model-based Testing Activities (PRASANNA et al., 2005)

Figure 2. Papers Categorization

Figure 3. Main Screen of the JabRef Tool

Figure 4. The papers' classification in accordance with the defined categories

Figure 5. The papers' classification after applying the inclusion/exclusion criteria

Figure 6. Tree describing the relation among papers A, B, C, and D

Figure 7. Analysis of MBT approaches by Testing Level

Figure 8. Analysis of MBT approaches using an Intermediate Model by Testing Level

Figure 9. Analysis of MBT approaches using Tools by Testing Level

Figure 10. MBT approaches not applied to specific contexts

Figure 11. Use of Functional and Non-functional Requirements in the software development and MBT process

Figure 12. Behavior of testing coverage criteria applied to discover defects in software


Table References

Table 1. Fields used to characterize one MBT approach

Table 2. Classification of Identified Papers

Table 3. Papers in Priority Level 1

Table 4. Papers in Priority Level 2

Table 5. Results based on the Testing Level Analysis

Table 6. Number of MBT approaches for Software Domains x Testing Levels

Table 7. Number of approaches using NFR classified according to ISO 9126 (2001)

Table 8. Number of approaches using types of NFR by Software Domain

Table 9. Number of approaches for different Behavior Models

Table 10. Number of non-automated steps for different complexity levels

Table 11. Number of citations for papers in Priority Levels 1 and 2

Table 12. Papers selected for Qualitative Analysis

Table 13. Qualitative analysis of Models, Tools, Inputs, and Outputs

Table 14. Qualitative analysis of the complexity of using MBT approaches

Table 15. Qualitative analysis of the types of testing coverage criteria

Table 16. Positive and Negative aspects of MBT approaches


1 Introduction

Software Testing is a Quality Assurance technique based on the dynamic analysis of the software product. In other words, software engineers execute the software aiming to make it fail. Testing contributes to defect detection by allowing software engineers to look for the errors that made the software fail. Besides, it can improve confidence that the software product is correct (adapted from ROCHA et al., 2001).

In the software development process, most defects are human-based, and despite the use of better development methods, tools, or professionals, defects remain present in software products (HOWDEN, 1987). This scenario highlights the importance of testing for software development, since testing is one of the last opportunities to evaluate the software product before its deployment to the final user (PRESSMAN, 2005). Testing must be accomplished at different levels (unit, integration, system, acceptance), in accordance with the software development process activities.

According to Beizer (1990) and Juristo et al. (2004), Software Testing can be considered the most expensive activity in the software development process, and thus it needs efficient planning to avoid waste of resources and schedule overruns. Myers (1979) observed that one of the factors influencing testing cost is the number of designed test cases, because resources (such as material and people) must be allocated for each test case. Therefore, to improve the use of such resources, it is useful to define criteria regarding testing coverage and test case generation:

• Testing Coverage Criteria: software testing coverage criteria allow the identification of the software parts that must be executed to assure software quality and indicate when the software has been tested enough (RAPPS and WEYUKER, 1982). In other words, they allow the identification of the percentage of the software that has been evaluated by a set of test cases (ROCHA et al., 2001).

• Test Case Generation Criteria: rules and guidelines used to generate a set of test cases "T" adequate for a software product (ROCHA et al., 2001).
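To make the coverage criterion concrete, coverage can be computed as the percentage of required software elements (e.g., statements or branches) exercised by the union of a test suite's execution traces. A minimal sketch, where the element identifiers and traces are made up for illustration:

```python
# Coverage = percentage of required elements hit by the whole test suite.
# Element names and traces below are illustrative, not from the review.

def coverage(required, exercised_per_test):
    """Percentage of `required` elements hit by the union of all test traces."""
    hit = set().union(*exercised_per_test) & set(required)
    return 100.0 * len(hit) / len(required)

required_branches = {"b1", "b2", "b3", "b4"}
traces = [{"b1", "b2"}, {"b2", "b3"}]       # elements exercised by each test case
print(coverage(required_branches, traces))  # 3 of 4 branches -> 75.0
```

A suite would be considered adequate for this criterion once the value reaches the threshold the criterion demands (often 100%).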


Beyond these concerns related to test planning, it is necessary to simplify or automate test execution and regression (the re-execution of test cases after the software is changed). The way test cases are generated can also impact testing coverage. In this context, Model-based Testing (MBT) appears to be a feasible approach to control software quality while reducing the costs of the testing process, because test cases can be generated from the software artifacts produced throughout the software development process. Exploring MBT enables software engineers to accomplish testing activities in parallel with the development of the software.

Model-based Testing is a technique for the automatic generation of test cases using models extracted from software artifacts (e.g., system requirements) (DALAL et al., 1999). To use it, some sort of formalized software model (such as formal methods, finite state machines, UML diagrams, etc.) must have been used to describe the software or system behavior.

Although MBT could be confused with Test Case Generation, MBT uses models developed during any software development phase to identify (or generate) the set of test cases. Test Case Generation is just one of the tasks of the software testing process, and it can be accomplished even without a formalized software model.

Figure 1 describes the specific activities of Model-based Testing:

• Build the model

• Test Case Generation

o Generate expected inputs

o Generate expected outputs

• Run the tests

• Compare actual outputs with expected outputs

• Decide on further actions (whether to modify the model, generate more test cases, stop testing, or estimate the software reliability (quality))
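The activities above can be sketched for the simplest kind of behavior model mentioned earlier, a finite state machine: build the model, derive input sequences and expected outputs from it, run the implementation, and compare. The two-state lamp below is a made-up example, not drawn from any surveyed approach, and it assumes every input is enabled in every state:

```python
# A tiny FSM behavior model: (state, input) -> (next_state, expected_output).
# The lamp example is invented for illustration only.
model = {
    ("off", "press"): ("on", "light on"),
    ("on", "press"): ("off", "light off"),
}

def generate_tests(model, start, length=2):
    """Derive input sequences and their expected outputs by walking the model."""
    tests = []
    for symbol in {inp for (_, inp) in model}:
        state, inputs, expected = start, [], []
        for _ in range(length):
            state, out = model[(state, symbol)]
            inputs.append(symbol)
            expected.append(out)
        tests.append((inputs, expected))
    return tests

def run_and_compare(implementation, tests):
    """Run the tests against the implementation and compare outputs."""
    return all(implementation(inputs) == expected for inputs, expected in tests)

def lamp(inputs):
    """The implementation under test (deliberately correct, for the sketch)."""
    state, outputs = "off", []
    for _ in inputs:
        state = "on" if state == "off" else "off"
        outputs.append(f"light {state}")
    return outputs

print(run_and_compare(lamp, generate_tests(model, "off")))  # True
```

The "decide on further actions" step is where a real approach would loop back: refine the model, generate longer sequences, or stop once a coverage criterion over the model (e.g., all transitions) is met.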


Figure 1. Model-based Testing Activities (PRASANNA et al., 2005)

The MBT strategy usually involves different levels of abstraction, a behavior model, the relationship between models and code, test case generation technology, selection criteria for test cases, and a discussion of what can and cannot be automated when it comes to testing (PRETSCHNER, 2005).

Several Model-based Testing approaches have been proposed in the technical literature. They use many different software models, describing different software characteristics, to specify the software. This makes the identification and use of MBT approaches in software development processes a hard task, especially considering that most software projects use UML as the basic notation for their models. Therefore, the purpose of this work is to identify and characterize the available MBT approaches by accomplishing a secondary study (systematic review). After this, it is expected to be possible to describe the state of the art of Model-based Testing, revealing the different perspectives used in the field and highlighting some directions for future research regarding Model-based Testing.


2 Systematic Review Protocol regarding Model-Based Testing

Most scientific research begins with an ad hoc literature review. However, if such a review is not complete and fair, it has little scientific value. This is the main reason to apply a systematic review. A systematic review is a method to identify, evaluate, and interpret the pertinent research about a particular research question (KITCHENHAM, 2004). Moreover, there are other specific reasons that justify the use of a systematic review:

• To summarize the evidence about a specific theory or technology;

• To identify gaps in certain research areas, contributing to the identification of investigation topics in the research field;

• To provide grounds for new research activities.

In the context of this work, the main purposes of executing the systematic review are to provide grounds for new research activities and to summarize the evidence regarding Model-based Software Testing. Therefore, a review protocol has been defined to guide the literature review execution. This review protocol is described in Sections 2.1 to 2.9. Further information regarding systematic reviews, their process, and templates can be found in (BIOLCHINI et al., 2005).

2.1 Goal of this Work

To accomplish a systematic review with the purpose of characterizing the model-based software testing approaches used in software development projects, as described in the technical literature.

2.2 Research Question

2.2.1 P0: What Model-based Software Testing Approaches have been described in the technical literature and what are their main characteristics?


• Population: Research on Software Testing.

• Intervention: Model-based Software Testing Approaches.

• Comparison: not applied.

• Effects: Characterization of Model-based Software Testing Approaches.

• Problem: To identify Model-based Software Testing Approaches and their

characteristics.

• Application: Scientific research on Model-based Software Testing Area.

2.3 Selection of Sources

The search must be accomplished in digital libraries or technical databases. Only papers published after 1990 must be analyzed. This year has been chosen because it marks the publication of UML. Although we search for approaches that use different types of software models for test case generation, after this systematic review we will focus on approaches that use UML models to extract test cases.

2.3.1 Keywords

• Population: Software;

• Intervention: Model based Testing, Model driven Testing, UML based Testing,

UML driven Testing, Requirement based Testing, Requirement driven Testing,

Finite State Machine based Testing, Specification Based Testing, Specification

Driven Testing;

• Outcome: approach, method, methodology, technique.
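These keyword sets are later combined into the search strings presented in Section 2.7.1. A minimal sketch of how such a boolean string can be assembled from the PICO structure above; the exact connectors accepted by each digital library vary, so this composition is an assumption, not the protocol's literal strings:

```python
# Combine the protocol's keyword sets into one boolean search string:
# Population AND Intervention AND Outcome, each group OR-joined.
population = ["software"]
intervention = [
    "model based testing", "model driven testing", "UML based testing",
    "UML driven testing", "requirement based testing",
    "requirement driven testing", "finite state machine based testing",
    "specification based testing", "specification driven testing",
]
outcome = ["approach", "method", "methodology", "technique"]

def or_group(terms):
    """Quote each term and OR-join the group, e.g. ("a" OR "b")."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

search_string = " AND ".join(or_group(g) for g in (population, intervention, outcome))
print(search_string)
```

In practice each source's advanced-search syntax (field restrictions, wildcards) would require adapting this generic string.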

2.3.2 Language of Primary Studies

English.

2.3.3 Identification of Source

Methods of Source Selection:

The sources of studies were accessed via the Web. In the context of this systematic review, no manual search was applied.


Listing of Sources:

• IEEE Xplore

• ACM Digital Library

• Compendex EI

• INSPEC

• Web of Science

• The Software Quality Engineering Laboratory (SQUALL) Technical Reports (Carleton University): http://squall.sce.carleton.ca/pubs_tech_rep.html. The following technical reports will be used:

o A State-based Approach to Integration Testing for Object-Oriented Programs

o A UML-Based Approach to System Testing.

o Improving State-Based Coverage Criteria Using Data Flow Information.

o Revisiting Strategies for Ordering Class Integration Testing in the Presence of

Dependency Cycles.

o Towards Automated Support for Deriving Test Data from UML Statecharts.

• Online Papers About Model-Based Testing, available at: http://www.geocities.com/model_based_testing/online_papers.htm (September 18, 2006). The following papers will be used:

o ABDURAZIK, A.; OFFUTT, J.; “Using UML Collaboration Diagrams for Static

Checking and Test Generation”; UML 2000 - The Unified Modeling Language.

Advancing the Standard. Third International Conference, York, UK, October

2000, Proceedings, Springer, 2000, 1939, 383-3.

o ANDREWS, A.; OFFUTT, J.; ALEXANDER., R.; “Testing web applications by

modeling with FSMs”; Software Systems and Modeling, 2005, 4(2).

o MEYER, Steve; SANDFOSS, Ray; “Applying Use-Case Methodology to SRE

and System Testing”; STAR West Conference, Oct. 1998, pp. 1-16.

• Kamran Ghani's website (PhD student at the University of York). Some papers related to Model-Based Testing, available at: http://www-users.cs.york.ac.uk/~kamran/papers.htm#R2 (September 18, 2006). The following papers will be used:


o SCHEETZ, M.; von MAYHAUSER, A.; FRANCE, R.; DAHLMAN, E.; HOWE,

A.E.; “Generating Test Cases from an OO Model with an AI Planning System”,

ISSRE '99: Proceedings of the 10th International Symposium on Software

Reliability Engineering, IEEE Computer Society, 1999, 250.

o OFFUTT, Jeff; LIU, Shaoying; ABDURAZIK, Aynur, AMMANN, Paul;

“Generating Test Data from State-Based Specifications”, Journal of Software

Testing, Verification and Reliability, John Wiley & Sons, Ltd, No.13, 2003, pp.

25-53.

o CRICHTON, Charles; CAVARRA, Alessandra; DAVIES, Jim; “Using UML for

Automatic Test Generation” Proceedings of Automated Software Engineering

(ASE), 2001.

• Informatik 2006 - http://www.informatik2006.de/272.html

o NEBUT, C., FLEUREY, F.; “Automatic Test Generation: A Use Case Driven

Approach”, IEEE Trans. Software Engineering, IEEE Press, 2006, 32, 140-

155 (quoted by one paper).

o BERNARD, E., BOUQUET, F., CHARBONNIER, A., LEGEARD, B.,

PEUREUX, F., UTTING, M., TORREBORRE, E.; “Model-Based Testing from

UML Models”, In proceedings: Informatik, 2006 (printed copy).

o ERNITS, J., KULL, A., RAINED, K., VAIN, J.; “Generating Test Sequences

from UML Sequence Diagrams and State Diagrams”, In proceeding:

Informatik, 2006 (printed copy).

2.3.4 Type of Primary Studies

Industrial Experience Report, Theoretical, Proof of Concept, Empirical Studies, and

Experimental Studies.

2.3.5 Primary Studies Inclusion and Exclusion Criteria

• The papers must be available in the Web (digital libraries or technical databases);

• The papers must be written in English;

• The papers must describe model-based software testing approaches.

• The papers must have published from 1990.


After the definition of these criteria, the identified papers must be classified using

one of the following categories:

[A] Papers describing Model-based software Testing Approaches regarding models

extracted from software requirements (Functional Testing). These models must

use UML diagrams as the mechanism to describe the software behavior.

[B] Papers describing Model-based software Testing Approaches regarding models

extracted from software requirements (Functional Testing). These models must

NOT use UML diagrams as the mechanism to describe the software behavior.

[C] Papers describing Model-based software Testing Approaches regarding models

extracted from the software internal structure (Structural Testing based on

architecture, components, interfaces, units). These models must use UML

diagrams as the mechanism to describe the software internal structure.

[D] Papers describing Model-based software Testing Approaches regarding models

extracted from the software internal structure (Structural Testing based on

architecture, components, interfaces, units). These models must NOT use UML

diagrams as the mechanism to describe the software internal structure.

[E] Any other paper collected during the protocol execution that is not related to the description of an MBT approach.

Figure 2 describes the categorization of papers resulting from the defined inclusion/exclusion criteria.

Figure 2. Papers Categorization

                                        Using UML    Not Using UML
  MBT Approach for Functional Testing      [A]            [B]
  MBT Approach for Structural Testing      [C]            [D]
  Not describing an MBT approach                [E]

After the characterization process, the papers categorized within the [A] or [C] options (MBT approaches using UML diagrams) will form Priority Level 1, the subset of papers categorized within the [B] or [D] options will form Priority Level 2, and the remaining papers will compose Priority Level 3. These papers must be selected for future reading and analysis. Papers categorized within the [E] option must be excluded. The definition of the Priority Levels will be done during this work.

2.3.6 Process to Select Primary Studies

One researcher will apply the search strategy to identify the primary studies. The identified papers will be filtered by this researcher, who must first read the abstract to produce an initial categorization of the paper. After that, the full text must be read, checking whether the inclusion/exclusion criteria are satisfied. In case of any conflict, a second researcher will make the verification. After all the papers have been read and categorized, a third researcher will double-check the selections. At the end of the process, the researchers must reach an agreement about which papers shall be selected for each category.
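The two-stage filtering described above (abstract screening, then a full-text check against the criteria of Section 2.3.5) can be sketched as a simple pipeline; the paper records and the predicate below are illustrative, not the review's actual data:

```python
# Two-stage screening sketch: stage 1 categorizes by abstract, stage 2 confirms
# the inclusion/exclusion criteria on the full text. Records are made up.

papers = [
    {"title": "P1", "in_english": True, "describes_mbt": True, "year": 2003},
    {"title": "P2", "in_english": True, "describes_mbt": False, "year": 2001},
    {"title": "P3", "in_english": False, "describes_mbt": True, "year": 1998},
]

def passes_criteria(paper):
    """Criteria of Section 2.3.5 (Web availability omitted for brevity)."""
    return paper["in_english"] and paper["describes_mbt"] and paper["year"] >= 1990

selected = [p["title"] for p in papers if passes_criteria(p)]
print(selected)  # ['P1']
```

The conflict-resolution step (second and third researchers) is a human activity and is deliberately left out of the sketch.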

2.4 Primary Studies Quality Evaluation

No checklist has been defined to evaluate the quality of the primary studies. The strategy is to consider quality as addressed by the selection of the studies' sources and by the application of the inclusion/exclusion criteria by the researchers.

2.5 Information Extraction Strategy

For each MBT approach selected after the filtering process, the researcher must extract the following information from the paper:

• Title;

• Authors;

• Source;

• Abstract;

• Type of Study: Industrial Experience Report, Theoretical, Proof of Concept, Empirical Study, or Experimental Study (controlled or quasi-experiment);

• Category: [A], [B], [C], [D] or [E];

• MBT approach Description;


• Model used to describe the software behavior;

• Testing Level: acceptance, system (functional, behavior, configuration, security,

stress, performance testing), integration, unit and regression (ROCHA et al.,

2001).

• Software Domain: Execution Platform or Software Development Paradigm:

Embedded System, Real-time System, Web Application, Client-server System,

Object-Oriented Software, Model-based Development, etc.

• Testing Coverage Criteria;

• Test Cases Generation Criteria;

• Testing Design and Generation Technology;

• Tools: Tools used to testing generation and automation;

• Proportion of Automated Steps (X/Y): number of automated steps (X) per total number of steps (Y);

• Complexity of each non-automated step: Necessary (initial behavior model building); Low (only the choice of a specific criterion or option); Medium (intermediate model building or test data definition); High (translation from one model into another model or another task); or Not Applied (if the approach doesn't describe steps or there are no non-automated steps);

• Intermediate Model used between behavior model and test cases;

• Non-functional requirement used to generate test cases: if applied, the category of non-functional requirement must be defined according to the ISO 9126 quality attributes;

• Inputs used to generate test cases and the complexity of interpreting them;

• Output generated by the approach and the complexity of interpreting it;

• External software used to support any steps of the approach (modeling,

generation, or execution);

• Limitation of the approach: what are the limitations of the approach? What task is it not able to do? E.g., it applies only to OO software developed in the Java language, or the test cases are generated from a specific, particular model developed during the work (so external people cannot use it);


• Restriction to use the approach: what are the restrictions to using the approach in the field? For example, there is no supporting tool, or the algorithm used to generate test cases does not accept a specific type of diagram, or some other constraint.

The information must be organized in accordance with Table 1:

Table 1. Fields used to characterize one MBT approach

Title

Authors

Source

Abstract

Type of Study

Category

Approach Description

Behavior Model

Testing Level

Software Domain

Tests Coverage Criteria

Test Cases Generation Criteria

Tests Design and Generation Technology

Tools to Test Case Generation or Execution

Automated Steps

Complexity of each non-automated step

Intermediate Model

Non-functional requirements used to generate test cases

Inputs

Complexity of inputs interpretation

Outputs

Complexity of outputs interpretation

External software

Limitation of the approach

Restriction to use the approach on the field

The data must be recorded using the JabRef tool, a reference manager (http://jabref.sourceforge.net/). This tool:


- Imports paper references (in BibTeX format) from all the digital libraries used in this work after the search execution;

- Allows the creation of fields to help the characterization of the papers;

- Allows the creation of reports to export information about the papers;

- Makes it easy to cite papers when writing in MS Word or LaTeX;

- Allows filtering the papers to extract information from them.

In this work, it has been used to (1) support the extraction of papers from the digital libraries, and (2) support the characterization of the papers (several custom fields have been created). A captured screen of JabRef is presented in Figure 3.

Figure 3. Main screen of the JabRef tool
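For illustration, a characterized entry in JabRef's underlying BibTeX file might look as follows. This is a hypothetical sketch: the custom field names shown are examples of the kind of characterization fields created in this work, not necessarily the exact names used.

```bibtex
% Hypothetical example of a characterized entry. The custom fields
% (category, behaviormodel, testinglevel) illustrate the kind of
% characterization fields created in JabRef for this review.
@inproceedings{Hartmann2000,
  author        = {J. Hartmann and C. Imoberdorf and M. Meisinger},
  title         = {UML-based integration testing},
  year          = {2000},
  category      = {A},
  behaviormodel = {UML Statechart},
  testinglevel  = {Integration},
}
```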


2.6 Summary of the Results

The results must be organized in tables. Analyses must be accomplished for:

• Identifying the MBT approaches at different testing levels;

• Observing the different models or languages used by the MBT approaches for test case generation, regarding the testing level;

• Identifying which approaches use an intermediate model for test case generation, since this may represent additional cost for the testing process;

• Observing in which software domains MBT approaches are most applied;

• Observing the way the MBT approaches define testing coverage and test case generation criteria (see the definition of these terms in section 1) considering the models used;

• Observing how MBT approaches support the execution of the generated test cases (whether the tasks are executed manually or automatically, for example);

• Observing positive and negative aspects of the MBT approaches;

• Identifying future research perspectives regarding model-based software testing.

2.7 Search

Search strings have been defined for the identification of primary studies in the selected sources described in section 2.3.3. For more search precision, the strings must be adapted to the syntax of each search engine used in this systematic review.

To evaluate the results of the search strings, the following set of papers can be used as a control group, because they represent important work concerned with MBT research:

(LUND and STØLEN, 2006), (HARTMANN et al., 2004), (LINZHANG, 2004), (PRETSCHNER et al., 2004), (BERTOLINO and GNESI, 2003), (BEYER et al., 2003), (TAHAT et al., 2001), (HARTMANN et al., 2000), (DALAL et al., 1999).


2.7.1 Search Strings

P0: IEEEXplorer

• (approach or method or methodology or technique) and ((specification based test) or (specification driven test) or (model based test) or (model driven test) or (use case based test) or (use case driven test) or (uml based test) or (uml driven test) or (requirement based test) or (requirement driven test) or (finite state machine based test) or (finite state machine driven test)) and (software) and (pyr >= 1990 and pyr <= 2006)

PS: We observed that using the term “test” or “testing” makes no difference to the output of this search engine.

P0: Compendex EI

• (approach or method or methodology or technique) and (("model based test") or ("model based testing") or ("model driven test") or ("model driven testing") or ("specification based test") or ("specification based testing") or ("specification driven test") or ("specification driven testing") or ("use case based test") or ("use case based testing") or ("use case driven test") or ("use case driven testing") or ("uml based test") or ("uml based testing") or ("uml driven test") or ("uml driven testing") or ("requirement based test") or ("requirement based testing") or ("requirement driven test") or ("requirement driven testing") or ("finite state machine based test") or ("finite state machine based testing") or ("finite state machine driven test") or ("finite state machine driven testing")) and (software)

PS: This search engine has a field to determine the range of years for the search. The range defined was 1990 to 2006.

P0: INSPEC

• (approach or method or methodology or technique) and (("model based test") or ("model based testing") or ("model driven test") or ("model driven testing") or ("specification based test") or ("specification based testing") or ("specification driven test") or ("specification driven testing") or ("use case based test") or ("use case based testing") or ("use case driven test") or ("use case driven testing") or ("uml based test") or ("uml based testing") or ("uml driven test") or ("uml driven testing") or ("requirement based test") or ("requirement based testing") or ("requirement driven test") or ("requirement driven testing") or ("finite state machine based test") or ("finite state machine based testing") or ("finite state machine driven test") or ("finite state machine driven testing")) and (software)

PS: This search engine provides separate databases per range of years. The databases selected cover 1990 to 2006.

P0: ACM Digital Library

• +“model based test” (equal to +“model based testing”) +software

• +“model driven test” (equal to “model driven testing”) +software

• +“uml based test” (equal to “uml based testing”) +software

• +“uml driven test” (equal to “uml driven testing”) +software

• +“use case based test” (equal to “use case based testing”) +software

• +“use case driven test” (equal to “use case driven testing”) +software

• +“requirement based test” (equal to “requirement based testing”) +software

• +“requirement driven test” (equal to “requirement driven testing”) +software

• +“specification based test” (equal to “specification based testing”) +software

• +“specification driven test” (equal to “specification driven testing”) +software

• +“finite state machine based test” (equal to “finite state machine based testing”) +software

• +“finite state machine driven test” (equal to “finite state machine driven testing”) +software

PS1: This search engine does not have search fields to determine the range of years. The selection of papers by date must be done manually.


PS2: This search engine does not have mechanisms to save the search results. Therefore, a manual filtering has been accomplished, and afterwards the results were inserted into the JabRef tool.

P0: Web of Science

• TS=(("model based test" OR "model based testing" OR "model driven test" OR "model driven testing" OR "specification based test" OR "specification based testing" OR "specification driven test" OR "specification driven testing" OR "use case based test" OR "use case based testing" OR "use case driven test" OR "use case driven testing" OR "uml based test" OR "uml based testing" OR "uml driven test" OR "uml driven testing" OR "finite state machine based test" OR "finite state machine based testing" OR "finite state machine driven test" OR "finite state machine driven testing") AND (software) AND (approach OR method OR methodology OR technique))

PS1: This search engine has a field to determine the range of years for the search. The range defined was 1990 to 2006.

After the searches in each one of the digital libraries, the following numbers of papers have been found (Table 2, Figure 4, and Figure 5):

Table 2. Classification of Identified Papers

Digital Library | Papers | [A] | [B] | [C] | [D] | [E] | NC | PL 1 | PL 2 | PL 3
IEEEXplorer | 89 | 4 | 37 | 3 | 14 | 31 | 0 | 7 | 10 | 41
COMPENDEX EI | 100 | 4 | 29 | 3 | 16 | 36 | 12 | 7 | 12 | 33
INSPEC | 181 | 7 | 48 | 3 | 29 | 55 | 41 | 10 | 15 | 60
ACM | 221 | 12 | 58 | 3 | 39 | 92 | 17 | 15 | 23 | 74
Web of Science | 33 | 1 | 8 | 0 | 3 | 12 | 9 | 1 | 2 | 9
SQUALL | 5 | 1 | 0 | 4 | 0 | 0 | 0 | 5 | 0 | 0
GEOCITIES Website | 4 | 1 | 1 | 2 | 0 | 0 | 0 | 3 | 1 | 0
Kamran Ghani's Website | 3 | 1 | 0 | 2 | 0 | 0 | 0 | 3 | 0 | 0
Informatik 2006 | 3 | 2 | 0 | 1 | 0 | 0 | 0 | 3 | 0 | 0
After filtering (removing repeated papers) | 406 | 26 | 93 | 21 | 62 | 147 | 57 | 47 | 31 | 124

NC = Not Classified (the papers were not found or are not available). PL = Priority Level.


Figure 4. The papers classification in accordance with the defined categories: [A] 6%, [B] 23%, [C] 5%, [D] 16%, [E] 36%, NC 14%.

Figure 5. The papers classification after applying the inclusion/exclusion criteria: Priority Level 1 12%, Priority Level 2 7%, Priority Level 3 31%, excluded papers 50%.

The list with all papers identified in this systematic review is presented in Appendix A.


2.8 Execution

After developing the search protocol, the systematic review was executed. The execution has been accomplished for every search string identified for each digital library, but the results have been analyzed as a whole, because the same paper could be found in different digital libraries.

A total of 406 papers have been found. A researcher then began the filtering process, during which 204 papers have been excluded (because their scope was not related to this work, they were repeated, or they were not available). After applying the inclusion/exclusion criteria, we selected 202 papers to read, extract, and analyze their information: 47 papers in the Priority Level 1 group, 31 papers in the Priority Level 2 group, and 124 papers in the Priority Level 3 group. The description of each Priority Level is presented in section 2.9 (“Analysis”).

2.9 Analysis

To analyze the identified papers, some steps will be accomplished:

1. The inclusion/exclusion criteria (section 2.3.5) will be applied;

a. Only papers in the categories A, B, C, or D that are available for download will be analyzed. Papers in category “E”, or papers that are not available, will be excluded from this research.

2. The selected papers have been separated into priority levels, according to criteria defined during this work and presented in this section (section 2.9.1):

a. First, the papers in Priority Level 1 will be analyzed;

b. Second, the papers in Priority Level 2 will be analyzed;

c. Finally, the papers in Priority Level 3 will be analyzed;

d. After that, all papers will be analyzed together.


3. For each priority level group, quantitative and qualitative information about the papers will be analyzed, according to the criteria defined and presented in this section (section 2.9.2).

2.9.1 Priority Levels (PL)

During this work, three priority levels have been defined to support the analysis of the papers. The criteria used to define which papers compose each priority level group are:

- PRIORITY LEVEL 1: only papers describing Model-based Testing approaches using UML, that is, every paper in the categories “A” and “C”. The papers are (Table 3):

Table 3. Papers in the Priority Level 1.

Approach | Title | Authors
Abdurazik2000 | Using UML Collaboration Diagrams for Static Checking and Test Generation | A. Abdurazik, J. Offutt
Ali2005 | A State-based Approach to Integration Testing for Object-Oriented Programs | S. Ali, M.J. Rehman, L.C. Briand, H. Asghar, Z. Zafar, A. Nadeem
Basanieri2000 | A Practical Approach to UML-based Derivation of Integration Tests | F. Basanieri, A. Bertolino
Bellettini2005 | TestUml: user-metrics driven web applications testing | C. Bellettini, A. Marchetto, A. Trentini
Bernard2006 | Model-Based Testing from UML Models | E. Bernard, F. Bouquet, A. Charbonnier, B. Legeard, F. Peureux, M. Utting, E. Torreborre
Bertolino2003 | Integration of "components" to test software components | A. Bertolino, E. Marchetti, A. Polini
Bertolino2005 | Introducing a Reasonably Complete and Coherent Approach for Model-based Testing | A. Bertolino, E. Marchetti, H. Muccini
Beyer2003 | Automated TTCN-3 test case generation by means of UML sequence diagrams and Markov chains | M. Beyer, W. Dulz, F. Zhen
Botaschanjan2004 | Testing agile requirements models | J. Botaschanjan, M. Pister, B. Rumpe
Briand2002 | A UML-Based Approach to System Testing | L.C. Briand, Y. Labiche
Briand2002a | Automating impact analysis and regression test selection based on UML designs | L.C. Briand, Y. Labiche, G. Soccar
Briand2002b | Revisiting Strategies for Ordering Class Integration Testing in the Presence of Dependency Cycles | L.C. Briand, Y. Labiche, Y. Wang
Briand2004a | Towards Automated Support for Deriving Test Data from UML Statecharts | L.C. Briand, Y. Labiche, J. Cui
Briand2004b | Improving State-Based Coverage Criteria Using Data Flow Information | L.C. Briand, Y. Labiche, Q. Lin
Briand2006 | Automated, contract-based user testing of commercial-off-the-shelf components | L.C. Briand, Y. Labiche, M. Sówka
Carpenter1999 | Verification of requirements for safety-critical software | P. B. Carpenter
Cavarra2003 | A method for the automatic generation of test suites from object models | A. Cavarra, C. Crichton, J. Davies
Chen2002 | Specification-based regression test selection with risk analysis | Y. Chen, R. L. Probert, D. P. Sims
Chevalley2001 | Automated Generation of Statistical Test Cases from UML State Diagrams | P. Chevalley, P. T. Fosse
Crichton2001 | Using UML for Automatic Test Generation | C. Crichton, A. Cavarra, J. Davies
Deng2004 | Model-based testing and maintenance | D. Deng, P.C. Sheu, T. Wang
Garousi2006 | Traffic-aware stress testing of distributed systems based on UML models | V. Garousi, L. C. Briand, Y. Labiche
Gnesi2004 | Formal test-case generation for UML statecharts | S. Gnesi, D. Latella, M. Massink
Gross2005 | Model-Based Built-In Tests | H-G. Gross, I. Schieferdecker, G. Din
Hartman2004 | The AGEDIS tools for model based testing | A. Hartman, K. Nagin
Hartmann2000 | UML-based integration testing | J. Hartmann, C. Imoberdorf, M. Meisinger
Kansomkeat2003 | Automated-generating test case using UML statechart diagrams | S. Kansomkeat, W. Rivepiboon
Kim99 | Test cases generation from UML state diagrams | Y.G. Kim, H.S. Hong, D.H. Bae, S.D. Cha
Linzhang2004 | Generating test cases from UML activity diagram based on Gray-box method | W. Linzhang, Y. Jiesong, Y. Xiaofeng, H. Jun, L. Xuandong, Z. Guoliang
Liuying1999 | Test selection from UML Statecharts | L. Liuying, Q. Zhichang
Lucio2005 | A methodology and a framework for model-based testing | L. Lucio, L. Pedro, D. Buchs
Lund2006 | Deriving tests from UML 2.0 sequence diagrams with neg and assert | M.S. Lund, K. Stølen
Meyer1998 | Applying Use-Case Methodology to SRE and System Testing | S. Meyer, R. Sandfoss
Mingsong2006 | Automatic test case generation for UML activity diagrams | C. Mingsong, Q. Xiaokang, L. Xuandong
Murthy2006 | Test ready UML statechart models | P.V.R. Murthy, P.C. Anitha, M. Mahesh, R. Subramanyan
Nebut2006 | Automatic Test Generation: A Use Case Driven Approach | C. Nebut, F. Fleurey
Offutt1999b | Generating tests from UML specifications | J. Offutt, A. Abdurazik
Offutt2003 | Generating Test Data from State-Based Specifications | J. Offutt, S. Liu, A. Abdurazik, P. Ammann
Olimpiew2005 | Model-based testing for applications derived from software product lines | E.M. Olimpiew, H. Gomaa
Riebisch2002 | UML-based statistical test case generation | M. Riebisch, I. Philippow, M. Gotze
Rumpe2003 | Model-based testing of object-oriented systems | B. Rumpe
Scheetz1999 | Generating Test Cases from an OO Model with an AI Planning System | M. Scheetz, A. Mayrhauser, R. France, E. Dahlman, A.E. Howe
Sokenou2006 | Generating Test Sequences from UML Sequence Diagrams and State Diagrams | D. Sokenou
Traore2003 | A transition-based strategy for object-oriented software testing | I. Traoré
Vieira2006 | Automation of GUI testing using a model-driven approach | M. Vieira, J. Leduc, B. Hasling, R. Subramanyan, J. Kazmeier
Wu2003 | UML-Based Integration Testing for Component-Based Software | Y. Wu, M-H. Chen, J. Offutt
Zhen2004 | Model-based testing with UML applied to a roaming algorithm for Bluetooth devices | R.D. Zhen, J. Grabowski, H. Neukirchen, H. Pals

- PRIORITY LEVEL 2: papers in the categories “B” or “D” that are referenced by the selected papers (A, B, C, or D). To identify them, a tree connecting all the papers referenced among them was built (Figure 6); a paper must be cited more than twice (>2) to compose PL 2 (22 papers). Each node is a paper, and an arrow indicates that the paper at the arrow's origin references the paper at the arrow's end. The tree is described as a UML Activity Diagram, whose swim lanes represent the years (from 1990 until 2006). Another 9 papers published between 2004 and 2006 have been included subjectively, because such recent papers cannot yet have an expressive number of citations. In total, 31 papers compose PL 2, but some of them describe the same approach; in those cases, just one paper per approach has been analyzed (Table 4):

Table 4. Papers in the Priority Level 2.

Approach | Title | Authors
Ammann1994 (Amla1992) | Using formal methods to derive test frames in category-partition testing (Using Z specifications in category partition testing) | P. Ammann, J. Offutt (N. Amla, P. Ammann)
Andrews2005 | Testing web applications by modeling with FSMs | A. Andrews, J. Offutt, R. Alexander
Barbey1996 | A theory of specification-based testing for object-oriented software | S. Barbey, D. Buchs, C. Péraire
Bousquet1999a | Lutess: a specification-driven testing environment for synchronous software | L. du Bousquet, F. Ouabdesselam, J.-L. Richier, N. Zuanon
Chang1999 (Chang1996) | Structural specification-based testing: automated support and experimental evaluation (Structural specification-based testing with ADL) | J. Chang, D. J. Richardson (J. Chang, D.J. Richardson, S. Sankar)
Chen2005 | An approach to integration testing based on data flow specifications | Y. Chen, S. Liu, F. Nagoya
Dalal1999 (Dalal1998) | Model-based testing in practice (Model-based testing of a highly programmable system) | S.R. Dalal, A. Jain, N. Karunanithi, J.M. Leaton, C.M. Lott, G.C. Patton, B.M. Horowitz (S.R. Dalal, A. Jain, N. Karunanithi, J.M. Leaton, C.M. Lott)
Friedman2002 | Projected state machine coverage for software testing | G. Friedman, A. Hartman, K. Nagin, T. Shiran
Gargantini1999 | Using model checking to generate tests from requirements specifications | A. Gargantini, C. Heitmeyer
Hong2000 | A test sequence selection method for statecharts | H.S. Hong, Y.G. Kim, S.D. Cha, D.H. Bae, H. Ural
Legeard2004 | Controlling test case explosion in test generation from B formal models | B. Legeard, F. Peureux, M. Utting
Mandrioli1995 | Generating test cases for real-time systems from logic specifications | D. Mandrioli, S. Morasca, A. Morzenti
Offut1999a | Generating test data from SOFL specifications | J. Offutt, S. Liu
Paradkar2004a | Plannable test selection criteria for FSMs extracted from operational specifications | A. Paradkar
Parissis1996 | Specification-based testing of synchronous software | I. Parissis, F. Ouabdesselam
Pretschner2001 (or Pretschner2001a) | Model based testing in evolutionary software development (Model-based testing for real: the inhouse card case study) | A. Pretschner, H. Lotzbeyer, J. Philipps (A. Pretschner, O. Slotosch, E. Aiglstorfer, S. Kriebel)
Richardson1992 | Specification-based test oracles for reactive systems | D.J. Richardson, S.L. Aha, T.O. O'Malley
Richardson1996 | Software testing at the architectural level | D.J. Richardson, A.L. Wolf
Satpathy2005 | ProTest: An Automatic Test Environment for B Specification | M. Satpathy, M. Leuschel, M. Butler
Sinha2006a | Model-based functional conformance testing of web services operating on persistent data | A. Sinha, A. Paradkar
Stobie2005a | Model Based Testing in Practice at Microsoft | K. Stobie
Stocks1993 (or Stocks1993a or Stocks1996) | Test template framework: a specification-based testing case study (Test templates: a specification-based testing framework, or A Framework for Specification-Based Testing) | P. Stocks, D. Carrington
Tahat2001 | Requirement-based automated black-box test generation | L.H. Tahat, B. Vaysburg, B. Korel, A.J. Bader
Tan2004 | Specification-based testing with linear temporal logic | L. Tan, O. Sokolsky, I. Lee
Xu2006a | State-based testing of integration aspects | W. Xu, D. Xu

Figure 6. Tree describing the relation among the papers in categories A, B, C, and D.


- PRIORITY LEVEL 3: every paper in the categories “B” or “D” that is not included in PL 2. In total, 124 papers are included in Priority Level 3 (because 11 papers are repeated in PL 2, but will be counted as just 5).

2.9.2 Quantitative and Qualitative Analysis

The analysis will be accomplished in two ways: (1) Quantitative Analysis, describing statistical information about the selected papers (for example: how many approaches use a specific model to describe the software behavior?); and (2) Qualitative Analysis, describing subjective information extracted from each selected paper, or a group of them, about specific characteristics (for example: after the characterization, the approaches X and Y have some specific positive aspects that make them easy to apply in the field, but also some negative aspects that can make them difficult to use in some organizations).

• Quantitative Analysis:

To describe the quantitative results, some information must be extracted from the analysis of the papers. This information can be related to:

- Testing levels to which the approaches are applied;
- Software categories to which the approaches are applied;
- Level of automation of the approaches;
- Models used to describe the software behavior;
- Usage of tools to support the steps;
- Usage of an intermediate model during the test case generation process;
- Usage of non-functional requirements to describe the software behavior;
- Visibility of the approach in the Model-based Testing community.

These data may be crossed to define specific scenarios about the research on Model-based Testing approaches described in the technical literature. This information may reveal Software Engineering scenarios not covered by Model-based Testing approaches.
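The crossing of extracted characteristics mentioned above can be sketched as a simple cross-tabulation. This is a hypothetical illustration with invented data, not the analysis actually performed: it crosses two characterization fields (testing level and behavior model) and flags combinations covered by no approach.

```python
# Illustrative sketch (hypothetical data): crossing two extracted
# characteristics -- testing level vs. behavior model -- to spot
# combinations not covered by any characterized MBT approach.
from collections import Counter

papers = [  # (testing level, behavior model) per characterized approach
    ("Integration", "Statechart"),
    ("System", "Use Case"),
    ("System", "Statechart"),
]

cross = Counter(papers)
levels = sorted({lv for lv, _ in papers})
models = sorted({m for _, m in papers})
for lv in levels:
    for m in models:
        # A count of 0 marks a scenario not covered by any approach.
        print(lv, m, cross[(lv, m)])
```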


• Qualitative Analysis:

To describe the qualitative results, some subjective information must be extracted from the analysis of the papers regarding specific characteristics of one approach or a group of approaches. This information can be related to:

- Cost and effort to apply the approach in the field;

- Skills required to apply the approach in the field;

- Inputs or prerequisites necessary to use the approach;

- Limitations (application context, testing level, tools);

- Quality of the outputs generated by the approach.

To accomplish the qualitative analysis, just a subset of the papers may be used. These papers are selected after the characterization of the papers by the researcher, using two criteria: (1) the number of times the paper was referenced by other papers in the set of identified papers; (2) the researcher's judgment about the quality of the paper.


3 Results Analysis

The analysis has been accomplished at two levels, Quantitative and Qualitative, for papers in Priority Levels 1 and 2. The papers in Priority Level 3 have not been analyzed yet.

3.1 Quantitative Analysis

According to Dalal (1998), Model-based Testing depends on three key elements:

the model used for the software behavior description, the test-generation algorithm

(criteria), and the tools that generate supporting infrastructure for the tests (including

expected outputs). In this section, we discuss these and other characteristics that influence the quality of MBT approaches, such as testing level, software domain, and automated steps.

3.1.1 Testing Level

MBT approaches have been characterized based on the testing level at which they are applied. This information is important to show the scope within which research on MBT approaches has been developed.

Tests may be applied to very small units (Unit Testing), collections of units

(Integration Testing), or entire systems (System Testing – functional and non-functional).

MBT can assist test activities at all levels and in different ways. Moreover, Regression

Testing may be accomplished easily, because if the software changes, the model (or

specification) also changes and the new test cases can be generated from the automatic

generation process.

Therefore, these approaches have been characterized based on:

• The use of an intermediate model to generate test cases. Intermediate models are derived from the model describing the software (the one used as input by an MBT approach) in order to generate test cases. Usually, an intermediate model reduces the complexity of the original model, making it easier to apply a test case generation algorithm. However, an intermediate model may also increase the cost of applying a particular MBT approach, since it is sometimes necessary to manually translate the behavioral model into the intermediate model.

• The use of a tool to support steps in the test case generation process. A tool may reduce the cost and time to generate test cases, since some steps may be automated, and may make the approach easier to use.

Table 5 describes the results for these characteristics:

Table 5. Results based on the Testing Level Analysis

Testing Level     # approaches   Intermediate Model      Support tool
                                 Use      Not use        Use      Not cited
System            48             40%      60%            66%      34%
Integration       17             64.7%    35.3%          41.2%    58.8%
Unit/Component    8              50%      50%            62.5%    37.5%
Regression        4              25%      75%            50%      50%
TOTAL             72 (*)         45.6%    54.4%          59.5%    40.5%

(*) some approaches are applied to more than one testing level

[Figure 7: pie chart of MBT approaches in PL1 and PL2 by testing level — System Testing 63%, Integration Testing 22%, Unit Testing 10%, Regression Testing 5%.]

Figure 7. Analysis of MBT approaches by Testing Levels

MBT was originally intended for System Level Testing. This explains the high number of MBT approaches applied at this testing level, as shown in Figure 7: 63% of the approaches surveyed were used for System Level Testing. Examples of approaches are: (ABDURAZIK and OFFUTT, 2000), (BRIAND and LABICHE, 2002), (MINGSONG et al., 2006), (OFFUTT and ABDURAZIK, 1999), (OFFUTT et al., 2003), and (STOBIE, 2005).

Subsequently, MBT approaches were applied for other levels, like Integration Testing

(22%) [E.g.: (HARTMANN et al., 2000), (BASANIERI and BERTOLINO, 2000), (WU et


al., 2003)], Regression Testing (5%), and Unit Testing (10%) [e.g.: (CHANG et al., 1996), (KIM et al., 1999), (BRIAND et al., 2006)]. Unit Testing is not a usual abstraction level for MBT. The purpose of Unit Testing is to test small modules, functions, or classes after implementation, and many approaches are already available to support structural testing from the source code.

[Figure 8: bar chart of the percentage of approaches using an intermediate model, per testing level — System Testing: 40.00% use / 60.00% do not; Integration Testing: 64.71% / 35.29%; Unit Testing: 50.00% / 50.00%; Regression Testing: 25.00% / 75.00%; TOTAL: 45.57% / 54.43%.]

Figure 8. Analysis of MBT approaches using Intermediate Model by Testing Levels

Analyzing the usage of intermediate models (Figure 8), we observe that intermediate models are used more often for Integration Testing (64.7%) than for System Testing (40%). One possible reason is that design or architectural models usually do not describe control-flow or data-flow information; it is therefore necessary to translate such a model into another model in order to extract the paths and data to be used during the tests. In System Testing approaches, there is usually a model generated from the system functionalities that describes the steps to be followed and the data to be provided as input.
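As a minimal sketch of why such an intermediate graph model helps, the code below enumerates test paths from a small control-flow graph by depth-first search. The node names and the graph itself are invented for illustration, not taken from any surveyed approach.

```python
def simple_paths(graph, start, end, path=None):
    """Enumerate all simple (cycle-free) paths from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting nodes (simple-paths criterion)
            paths.extend(simple_paths(graph, nxt, end, path))
    return paths

# Intermediate model: control flow extracted from a hypothetical design model.
cfg = {"entry": ["validate"], "validate": ["process", "reject"],
       "process": ["exit"], "reject": ["exit"]}

# Each extracted path is a candidate test scenario.
print(simple_paths(cfg, "entry", "exit"))
```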


[Figure 9: bar chart of the percentage of approaches using a supporting tool, per testing level — System Testing: 66.00% use a tool / 34.00% not defined; Integration Testing: 41.18% / 58.82%; Unit Testing: 62.50% / 37.50%; Regression Testing: 50.00% / 50.00%; TOTAL: 59.49% / 40.51%.]

Figure 9. Analysis of MBT approaches using Tools by Testing Level

Estimating the number of tools supporting the test case generation process is difficult, because a large number of tools are under development. However, we observe that there are more MBT tools for System Testing than for Integration Testing (Figure 9). The reasons are unknown. Several proprietary and freeware tools have been developed, and counting all the MBT tools available is an unfeasible task. The URL www.opensourcetesting.org lists open-source testing tools for different tasks, including test case generation.

3.1.2 MBT Approaches by Software Domain

Usually, MBT approaches are applied to a specific application domain. This defines the scope of each approach and when it can be used.

Table 6 lists the software domains and the number of approaches applied per

testing level:

The majority of the selected approaches use UML, an object-oriented modeling language. This explains the high number of approaches applied to object-oriented software.

The majority of non-UML-based MBT approaches are applied to reactive systems and safety-critical systems. Examples are (LEGEARD et al., 2004), (MANDRIOLI et al., 1995), (PARISSIS and OUABDESSELAM, 1996), (PRETSCHNER et al., 2001), and


(RICHARDSON et al., 1992). These application domains demand high coverage and software quality, and therefore need strong testing methods such as MBT.

Table 6. Number of MBT approaches for Software Domains x Testing Levels

Software Domain             System  Integration  Unit  Regression
Object-oriented software      17        10         4        2
Not defined                   13         3         2        0
Safety-Critical Software       5         0         1        0
Reactive systems               5         1         0        0
Distributed Systems            2         0         0        1
Web Application                2         0         0        0
Component-based Software       0         2         0        0
Embedded System                2         0         0        1
Concurrent Program             1         0         0        0
Aspect-oriented Software       0         1         0        0
Java programs                  1         0         0        0
Software Product Line          1         0         0        0
Web Service                    0         1         0        0
COTS                           0         0         1        0
Any domain                     1         0         0        1

Moreover, MBT has not been used sufficiently for software domains like COTS, Software Product Lines, Aspect-oriented Software, Web Services, and Web Applications. Applying MBT to these software domains is a potential topic for future research. Potential questions are: what are the limitations of current model-based Integration Testing for Web Services (or another software domain to which MBT is not frequently applied), and how can they be avoided? How can MBT be applied to System Testing of Web Applications (or another software domain to which MBT is not frequently applied)?

3.1.3 Non-functional Requirements

Software requirements may be functional or non-functional. Functional requirements define the scenarios and functionalities that the software must provide during its execution. Non-functional requirements (NFR) are essential to define architectural aspects and constraints during software development. Both need


to be tested. Generation of test cases from non-functional requirement descriptions is sometimes not performed. We describe below the MBT approaches used for non-functional test case generation.

The categories of NFR have been defined using ISO 9126 (2001). This standard defines quality attributes for software in 6 groups (Functionality, Reliability, Usability, Maintainability, Portability, and Efficiency). However, Functionality is composed of several sub-categories (including functional requirements); therefore, only the sub-category Security of the Functionality group has been used to classify non-functional requirements in this work. According to ISO 9126 (2001), the groups are:

• Efficiency: A set of attributes that bear on the relationship between the level of

performance of the software and the amount of resources used, under stated

conditions.

• Reliability: A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.

• Usability: A set of attributes that bear on the effort needed for use, and on the

individual assessment of such use, by a stated or implied set of users.

• Maintainability: A set of attributes that bear on the effort needed to make

specified modifications.

• Portability: A set of attributes that bear on the ability of software to be transferred

from one environment to another.

• Security: An attribute that bears on the ability of software to maintain its behavior and integrity in specific situations or contexts defined during its development.

Table 7. Number of approaches using NFR, classified according to ISO 9126 (2001)

NFR (according to ISO 9126)  Priority Level 1  Priority Level 2  TOTAL
Efficiency                          4                 4             8
Usability                           0                 0             0
Reliability                         1                 0             1
Maintainability                     0                 0             0
Portability                         0                 0             0
Security                            0                 4             4


From Table 7, we observe that the majority of MBT approaches that address the evaluation of NFR target Efficiency. In total, 8 approaches use efficiency descriptions for testing: (OFFUTT and ABDURAZIK, 1999), (RICHARDSON et al., 1992), (RUMPE, 2003), (ZHEN et al., 2004), (TAN et al., 2004), (GAROUSI et al., 2006), (MANDRIOLI et al., 1995), and (PRETSCHNER et al., 2001a). All 8 approaches use information about response time (performance) during test case generation. Analyzing the usage of NFR descriptions during test case generation across software domains, we observe that Efficiency requirements are used for object-oriented software, distributed systems, reactive systems, critical systems, and embedded systems. Other software domains do not address NFR for test case generation.
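As a hedged sketch of how a response-time (Efficiency) requirement can become an executable test oracle, the code below times an operation against a budget. The 50 ms bound and the operation itself are invented for illustration, not taken from any surveyed approach.

```python
import time

# Hypothetical Efficiency requirement annotated on the model: the operation
# must respond within 50 milliseconds.
RESPONSE_TIME_BUDGET_S = 0.050

def operation_under_test():
    """Stand-in for a system operation; sums a small range of integers."""
    return sum(range(1000))

start = time.perf_counter()
result = operation_under_test()
elapsed = time.perf_counter() - start

# The test passes only if the result is correct AND the response-time
# requirement is met.
print(result == 499500 and elapsed < RESPONSE_TIME_BUDGET_S)
```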

Moreover, 4 approaches use Security requirements during test case generation, all of them in Priority Level 2: (RICHARDSON et al., 1992), (TAN et al., 2004), (PARISSIS and OUABDESSELAM, 1996), and (du BOUSQUET et al., 1999). These requirements are usually used for reactive systems, a software domain with well-defined approaches to describe security properties. However, these application domains intersect, and a specific software system could be classified in several of these domains at once.

Table 8. Number of approaches using types of NFR by Software Domain

Domain                     Efficiency  Usability  Reliability  Maintainability  Portability  Security
Object-oriented Software       3           0           0              0              0           0
Distributed System             1           0           1              0              0           0
Reactive System                1           0           0              0              0           3
Critical System                1           0           0              0              0           0
Embedded System                1           0           0              0              0           0
Not defined                    1           0           0              0              0           1

Several types of NFR, like usability, maintainability, and portability, have not been tested using MBT. These aspects could be explored in future research.

3.1.4 Models Used to Describe the Software Structure or Behavior

The model used to describe the software structure or behavior is an important element in the study of MBT. The model defines the limitations of each approach, based on the information it can represent about the software structure or behavior. Sometimes the model is used for a specific application domain and cannot be used in another domain.

Table 9 lists behavior models and the number of approaches using each model:

Table 9. Number of approaches for different Behavior Models

Behavior Model                                   Priority Level 1  Priority Level 2  TOTAL

Statechart Diagram 26 1 27

Class Diagram 19 0 19

Sequence Diagram 19 0 19

Use Case Diagram 11 0 11

OCL 11 0 11

(Extended) Finite State Machine 3 7 10

Activity Diagram 9 0 9

Collaboration Diagram 8 0 8

Object Diagram 7 0 7

Graph 6 1 7

Z Specification 0 4 4

UML 2.0 Test Profile 2 0 2

Markov Chain Usage Model 2 0 2

XML Model 1 1 2

Abstract State Machine 1 1 2

CDFD 0 2 2

LUSTRE Model 0 2 2

UML 2.0 Package Diagram 1 0 1

UML 2.0 Interaction Overview Diagram 1 0 1

UML 2.0 Context Diagram 1 0 1

TTCN-3 1 0 1

Transition Test Sequence 1 0 1

Test Model (graphs + constraints) 1 0 1

State COllaboration TEst Model 1 0 1

Operational Profile 1 0 1

IOLTS (input-enabled labeled transition systems over i/o-pairs) 1 0 1

Intermediate Format 1 0 1

Extended Context Free Grammar Model 1 0 1

CSPE Constraints 1 0 1

Feature model 1 0 1

Component-based software architecture model 1 0 1

Simulation Model 1 0 1


AETGSpec 0 1 1

SCR Specifications 0 1 1

TRIO Specification 0 1 1

DNF 0 1 1

S-Module 0 1 1

I-Module 0 1 1

SALT Model 0 1 1

System Structure Diagrams (SSDs), State Transition Diagram (STD), Message Sequence Charts (MSCs), Data Type Diagram (DTD) 0 1 1

Prolog Specification 0 1 1

SDL Model 0 1 1

Linear Temporal Logic (LTL) 0 1 1

CO-OPN/2 Specification 0 1 1

ADL Specification 0 1 1

Chemical Abstract Machine (CHAM) 0 1 1

WSDL-S 0 1 1

State Model in AOP 0 1 1

As said before, the majority of the selected approaches use UML for test case generation, hence the high number of approaches applied to object-oriented software. Among the MBT approaches using UML, 27 use Statecharts [(BRIAND et al., 2004b), (HARTMAN and NAGIN, 2004), (HARTMANN et al., 2000), (OFFUTT and ABDURAZIK, 1999), (OFFUTT et al., 2003), (TRAORE, 2004), and other papers], 19 use Class Diagrams [(BASINIERI and BERTOLINO, 2000), (BERTOLINO et al., 2003), (BRIAND and LABICHE, 2002), (BRIAND and LABICHE, 2006), (VIEIRA et al., 2006), and other papers], and 19 use Sequence Diagrams [(BERTOLINO et al., 2003), (BRIAND and LABICHE, 2006), (RUMPE, 2003), (WU et al., 2003), and other papers]. These are the most used models.

Among the MBT approaches not using UML, 7 use Finite State Machines [(ANDREWS et al., 2005), (KIM et al., 1999), (PARADKAR, 2004), (STOBIE, 2005), (TAHAT et al., 2001), and other papers] and 4 use Z Specifications [(AMLA and AMMANN, 1992), (AMMANN and OFFUTT, 1994), (LEGEARD et al., 2004), (STOCKS and CARRINGTON, 1996)]. These are the most used models in this context.

There is not enough information to decide whether one model is more useful than another. This depends on many variables (or characteristics), such as the software domain, the algorithm used for test generation, and the testing level. However, knowing this information is useful to filter approaches for comparison.

More information about the models used to describe the software structure and behavior, and their limitations, is given in the Qualitative Analysis section (3.2).

3.1.5 Coverage and Test Case Generation Criteria

The main differences among MBT approaches are the criteria used to define the testing coverage (the subset of test cases generated from a specific model) and the test generation (the steps to be accomplished).

The Testing Coverage Criteria define the subset of test cases generated by the approach. These criteria depend on the model used to describe the software behavior. They may also determine the limitations of the approach, and the type and format of the test cases that can be generated. The types of criteria used are control-flow criteria (e.g., transforming a model into a graph) and data-flow criteria (e.g., defining the possible inputs for a specific field).

Popular control-flow testing coverage criteria, based on graph theory, are: all-states (all-nodes), all-transitions, simple-paths (single-paths), all-paths, n-path coverage, all-transition-pairs, and loop coverage. Common data-flow testing criteria are: boundary value coverage, equivalence class, category-partition, one-value/all-values coverage, all-definitions, all-uses, and all def-use paths coverage.
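To make the all-transitions criterion concrete, the sketch below checks whether a test suite exercises every transition of a small finite state machine. The machine, its states, and its event names are illustrative assumptions, not taken from any surveyed approach.

```python
# A hypothetical vending-machine FSM: (state, event) -> next state.
transitions = {
    ("idle", "insert_coin"): "ready",
    ("ready", "select"): "dispensing",
    ("ready", "refund"): "idle",
    ("dispensing", "done"): "idle",
}

def covered(test_suite, start="idle"):
    """Return the set of (state, event) transitions exercised by a test suite."""
    hit = set()
    for events in test_suite:
        state = start
        for ev in events:
            nxt = transitions.get((state, ev))
            if nxt is None:
                break  # event not enabled in this state: stop this test
            hit.add((state, ev))
            state = nxt
    return hit

# Two event sequences are enough to satisfy all-transitions for this FSM.
suite = [["insert_coin", "select", "done"], ["insert_coin", "refund"]]
print(covered(suite) == set(transitions))  # True: every transition covered
```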

Details on which approaches use data-flow or control-flow criteria are presented in the Qualitative Analysis section (3.2).

The Test Case Generation Criteria define the steps to be performed to generate test cases from a software behavior model. Each approach has its own steps, with different complexity and automation levels. The steps are: (1) modeling the software behavior; (2) applying testing coverage criteria to select the set of test cases; (3) generating the test cases and expected results; (4) executing the tests and comparing the obtained results against the expected results.
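The four steps can be sketched end to end on a toy example. The guarded-rule model of the absolute-value function and the chosen inputs are assumptions for illustration, not a method from any surveyed approach.

```python
# (1) Behavior model: abs() specified as guarded input/output rules.
model = [(lambda x: x >= 0, lambda x: x),   # non-negative inputs pass through
         (lambda x: x < 0, lambda x: -x)]   # negative inputs are negated

# (2) Coverage criterion: one representative input per rule plus a boundary.
selected_inputs = [5, -5, 0]

# (3) Generate test cases with expected outputs derived from the model.
def expected(x):
    for guard, out in model:
        if guard(x):
            return out(x)

test_cases = [(x, expected(x)) for x in selected_inputs]

# (4) Execute the implementation under test and compare against the model.
implementation = abs
results = [implementation(x) == exp for x, exp in test_cases]
print(all(results))  # True
```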

In the ideal case, test cases are generated fully automatically from the software behavior model, with no manual steps, thus reducing time, effort, and cost, and also increasing testing coverage. However, some approaches have intermediate non-automated steps in their testing process, with different complexity levels. The complexity of non-automated steps is discussed in the next section (3.1.6).

Another important issue regarding the steps in a test case generation process is the automated support provided by tools. Tools make the application of an MBT approach in a project feasible, automating some steps and supporting both the performance of non-automated steps and the integration of data between steps. The use of tools by MBT approaches was discussed previously in Section 3.1.1.

3.1.6 Level of Automation

One of the most important aspects of an MBT approach is the level of automation of its tasks. Automation means less cost, time, and effort to generate test cases. Usually, the testing process of an MBT approach is composed of automated and non-automated steps, so it is necessary to analyze the complexity of the non-automated steps. A single manual step in a significantly automated approach can be harder than several manual steps in another approach.

A non-automated step may have a different complexity level depending on whether it involves modeling the software behavior (a task every approach needs to execute), choosing the testing criteria, modeling an intermediate model, or translating one model into another.

The classification of non-automated steps for each MBT approach is listed in Appendix B. Table 10 summarizes the classification of non-automated steps by complexity level:

Table 10. Number of non-automated steps for different complexity levels

Step Complexity Level                                         Priority Level 1  Priority Level 2  TOTAL
Necessary (initial behavior model modeling)                          42                23           65
Low (only choices)                                                    8                 6           14
Medium (intermediate model modeling or test data definition)         13                 3           16
High (translation or other task)                                      8                 0            8
Not applied                                                           2                 1            3


All approaches have at least one non-automated step: the initial modeling of the software behavior. For this reason, this step has been classified as “Necessary”. In some approaches, however, the initial modeling is a hard task, involving translation between models, data modeling, and so on.

Other types of non-automated steps may represent extra effort, time, and cost in the application of the MBT approach. Manual steps that only select an option (the testing criteria to be applied, the test case to be executed or filtered, etc.) have been classified as Low Complexity, because the tester just needs to make a choice. This kind of manual step was found in 14 approaches; examples are (du BOUSQUET et al., 1999), (MANDRIOLI et al., 1995), (MINGSONG et al., 2006), and (TAHAT et al., 2001). Manual steps that require intermediate-model modeling or test data definition by hand have been classified as Medium Complexity. They were found in 16 approaches; examples are (ANDREWS et al., 2005), (BASANIERI and BERTOLINO, 2000), (OFFUTT et al., 2003), and (VIEIRA et al., 2006). Several approaches use an intermediate model during the test case generation process, but in most cases it is generated automatically. Manual generation of an intermediate model is a hard task. In several UML-based MBT approaches (13 approaches), intermediate models are needed for test data definition, because UML diagrams usually describe just the software structure, not the inputs.

Translation from one model into another is a hard step. It involves applying rules and constraints to derive one model from another, and can make the test generation process very difficult or even infeasible. This step has been classified as High Complexity and was found in 8 approaches (all in Priority Level 1). Examples are (BRIAND et al., 2004), (OFFUTT et al., 2003), and (RUMPE, 2003).

3.1.7 Analysis of Papers References

The level of acceptance of an approach by the Software Engineering community is an important factor in our analysis. The number of citations of each paper may indicate the visibility or quality of the paper/approach in the community. Papers are likely to be cited more often by other authors when they describe interesting concepts or results in a research field.


Table 11 lists the number of citations of each paper in both priority levels:

Table 11. Number of citations of papers in Priority Levels 1 and 2

Approach          # citations   # papers in PL 1 and 2 cited by it   Priority Level

Offutt1999b 24 2 1

Richardson1992 17 0 2

Stocks1996 (1) 13 0 2

Offutt2003 12 10 1

Briand2002 12 3 1

Hartmann2000 11 1 1

Kim1999 10 1 1

Gargantini1999 9 3 2

Abdurazik2000 9 1 1

Stocks1993a (1) 8 0 2

Chang1996 7 1 2

Dalal1999 7 1 2

Stocks1993 (1) 7 1 2

Parissis1996 6 1 2

Offut1999a 5 4 2

Ammann1994 5 3 2

Basanieri2000 4 3 1

Mandrioli1995 4 1 2

Amla1992 4 0 2

Dalal1998 4 0 2

Pretschner2001 (2) 4 0 2

Pretschner2001a (2) 4 0 2

Chang1999 3 3 2

Friedman2002 3 2 2

Riebisch2002 3 2 1

Bousquet1999a 3 1 2

Richardson1996 3 1 2

Barbey1996 3 0 2

Hong2000 3 0 2

Tahat2001 3 0 2

Chevalley2001 2 3 1

Kansomkeat2003 1 5 1

Nebut2006 1 5 1

(1) These three papers describe the same approach and were analyzed as one paper. (2) These two papers describe the same approach and were analyzed as one paper.


Paradkar2004a (3) 1 5 2

Linzhang2004 1 4 1

Wu2003 1 3 1

Bertolino2003 1 2 1

Scheetz1999 1 2 1

Briand2002a 1 1 1

Liuying1999 1 0 1

Ali2005 0 9 1

Bertolino2005 0 9 1

Murthy2006 0 7 1

Sinha2006a (3) 0 6 2

Briand2004a 0 5 1

Chen2005 (3) 0 5 2

Briand2004b 0 4 1

Sokenou2006 0 4 1

Lund2006 0 3 1

Vieira2006 0 3 1

Andrews2005 (3) 0 3 2

Satpathy2005 (3) 0 2 2

Botaschanjan2004 0 1 1

Garousi2006 0 1 1

Hartman2004 0 1 1

Rumpe2003 0 1 1

Traore2003 0 1 1

Zhen2004 0 1 1

Tan2004 (3) 0 1 2

Htoon2005 (3) 0 1 2

Xu2006a (3) 0 1 2

Bellettini2005 0 0 1

Bernard2006 0 0 1

Beyer2003 0 0 1

Briand2002b 0 0 1

Briand2006 0 0 1

Carpenter1999 0 0 1

Cavarra2003 0 0 1

Chen2002 0 0 1

Crichton2001 0 0 1

Deng2004 0 0 1

Gnesi2004 0 0 1

Gross2005 0 0 1

(3) These ten papers were selected subjectively by the researchers.


Lucio2005 0 0 1

Meyer1998 0 0 1

Mingsong2006 0 0 1

Olimpiew2005 0 0 1

Stobie2005a (3) 0 0 2

Belletini2005 0 0 1

Okika2006 (3) 0 0 2

One of the most cited papers is (OFFUTT and ABDURAZIK, 1999), recognized as the first published MBT approach using UML; it uses statecharts for model-based System Testing. In the top 10, 6 papers use UML: (OFFUTT and ABDURAZIK, 1999), (OFFUTT et al., 2003), (BRIAND and LABICHE, 2002), (HARTMANN et al., 2000), (KIM et al., 1999), and (ABDURAZIK and OFFUTT, 2000); and 4 papers use non-UML approaches (2 of them describe the same approach): (RICHARDSON et al., 1992), (STOCKS and CARRINGTON, 1996), (GARGANTINI and HEITMEYER, 1999), and (STOCKS and CARRINGTON, 1993a).

Papers with high visibility that are popular among research groups provide valuable information on new directions in MBT research and represent the state of the art. These approaches need to be compared closely with emerging approaches.

We have also identified interesting pre-1990 papers. These papers describe fundamental concepts that have been used in later MBT approaches. The papers are:

• C. V. RAMAMOORTHY, S. F. HO, and W. T. CHEN, “On the automated

generation of program test data”, IEEE Transactions on Software

Engineering, SE-2(4):293–300, Dec. 1976.

This paper describes the seminal approach for test data generation. In this approach, given a program graph, a set of paths is identified to satisfy given testing criteria. It generates test data from FORTRAN programs. This paper is referenced by several MBT approaches to indicate the origin of automated test generation.

• T.S. CHOW, “Testing software design modeled by finite-state machines”,

IEEE Trans. Software Eng., vol. SE-4, pp. 178-187, Mar, 1978.

This paper presents the seminal approach for control-flow-based testing. It proposes a method of testing the correctness of control structures that can be modeled by a


finite-state machine. Test results derived from the design are evaluated against the specification. The method is based on a result from automata theory and can be applied to software testing. This paper is referenced by several MBT approaches to indicate the origin of automated test generation based on control flow.

• D. L. BIRD and C. U. MUNOZ, “Automatic generation of random self-

checking test cases”, IBM Systems Journal, 22(3):229–245, 1983.

This paper presents a random test case generation technique that generates test cases by randomly selecting input values. The technique may be applied to building generators for testing several different types of programs, including PL/I compilers, a graphics display manager, and sort/merge routines. The authors emphasize that although the concepts of test case generation are similar for different types of software, each test case generator is a specific tool that must be coded separately. This work is referenced many times by MBT approaches.
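The "self-checking" idea can be illustrated with a small sketch: each random case is built so that it carries its own expected result. All names here are invented; this is not Bird and Munoz's generator, only the principle:

```python
import random

# Illustrative self-checking random test generator: the expected (oracle)
# output is constructed first, then split into random inputs.
def random_merge_case(rng, size=5, bound=100):
    """Random test case for a merge routine: two sorted lists plus the oracle."""
    expected = sorted(rng.randrange(bound) for _ in range(2 * size))
    picks = sorted(rng.sample(range(2 * size), size))
    a = [expected[i] for i in picks]                           # sorted sublist
    b = [expected[i] for i in range(2 * size) if i not in picks]
    return a, b, expected

rng = random.Random(42)  # fixed seed so the generated suite is reproducible
a, b, expected = random_merge_case(rng)
```

A merge routine under test would then be run on `a` and `b` and its output compared against `expected`, with no hand-written oracle needed.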

• S. RAPPS, S. and E.J. WEYUKER, “Data Flow analysis techniques for test

data selection”, In: International Conference on Software Engineering, pp.

272-278, Tokyo, Sep, 1982.

This paper defines a family of program test data selection criteria derived from

data flow analysis techniques similar to those used in compiler optimization. It is

argued that currently used path selection criteria, which examine only the control

flow of a program, are inadequate for testing. This procedure associates with

each point in a program at which a variable is defined, those points at which the

value is used. Several test data selection criteria, differing in the type and number

of these associations, are defined and compared. This paper is referenced by several MBT approaches, usually when they use one of the test data selection criteria defined and compared in the paper.
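The basic def-use association underlying this family of criteria can be sketched as follows. The straight-line program encoding is invented for illustration; real data-flow analysis works over a control-flow graph:

```python
# Rough sketch of collecting def-use associations (the basis of the
# "all-uses" criteria family) over a straight-line program.
def def_use_pairs(statements):
    """statements: list of (line, defs, uses). Pair each use with the
    closest preceding definition of the same variable."""
    last_def, pairs = {}, []
    for line, defs, uses in statements:
        for var in uses:
            if var in last_def:
                pairs.append((var, last_def[var], line))  # (var, def-line, use-line)
        for var in defs:
            last_def[var] = line
    return pairs

prog = [
    (1, {"x"}, set()),        # x = input()
    (2, {"y"}, {"x"}),        # y = x + 1
    (3, set(), {"x", "y"}),   # print(x, y)
]
```

A test data selection criterion such as all-uses then requires at least one test to exercise each collected (definition, use) pair.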

• T. J. OSTRAND and M. J. BALCER, “The Category-Partition Method for

Specifying and Generating Functional Tests”, Communications of the ACM,

31(6), June 1988, pp. 676-686.

This paper describes for the first time the Category-Partition Method for functional test generation. It is a method for creating functional test suites in which a


test engineer analyzes the system specification and writes a series of formal test specifications (categories and partitions of values for each category) to produce test descriptions. This method is used by many MBT approaches for test data definition.

Appendix B summarizes all approaches analyzed during this work. More detailed information (subjective observations, summary, limitations, and restrictions) about each approach is available in the JabRef tool's database.

3.2 Qualitative Analysis

This section describes the qualitative analysis, that is, subjective information about the various MBT approaches: their limitations; cost, effort, and complexity; inputs, prerequisites, or skills required to use them; and the quality of the outputs they generate.

The qualitative analysis has been performed on 15 papers selected using the following criteria:

• The first 12 papers (after removing repeated approaches) have the highest number of citations;

• 3 papers of special interest published from 2004 to 2006 have also been analyzed.

The papers are described in Table 12:

Table 12. Papers selected for Qualitative Analysis

Approach | Title | Authors
Abdurazik2000 | Using UML Collaboration Diagrams for Static Checking and Test Generation | A. Abdurazik, J. Offutt
Briand2002 | A UML-Based Approach to System Testing | L.C. Briand, Y. Labiche
Chang1996 | Structural specification-based testing with ADL | J. Chang, D.J. Richardson, S. Sankar
Dalal1999 | Model-based testing in practice | S.R. Dalal, A. Jain, N. Karunanithi, J.M. Leaton, C.M. Lott, G.C. Patton, B.M. Horowitz
Gargantini1999 | Using model checking to generate tests from requirements specifications | A. Gargantini, C. Heitmeyer
Hartmann2000 | UML-based integration testing | A. Hartman, K. Nagin
Kim1999 | Test cases generation from UML state diagrams | Y.G. Kim, H.S. Hong, D.H. Bae, S.D. Cha
Mingsong2006 (4) | Automatic test case generation for UML activity diagrams | C. Mingsong, Q. Xiaokang, L. Xuandong
Offutt1999b | Generating tests from UML specifications | J. Offutt, A. Abdurazik
Offutt2003 | Generating Test Data from State-Based Specifications | J. Offutt, S. Liu, A. Abdurazik, P. Ammann
Parissis1996 | Specification-based testing of synchronous software | I. Parissis, F. Ouabdesselam
Richardson1992 | Specification-based test oracles for reactive systems | D.J. Richardson, S.L. Aha, T.O. O'Malley
Stobie2005a (4) | Model Based Testing in Practice at Microsoft | K. Stobie
Stocks1996 | A Framework for Specification-Based Testing | P. Stocks, D. Carrington
Vieira2006 (4) | Automation of GUI testing using a model-driven approach | M. Vieira, J. Leduc, B. Hasling, R. Subramanyan, J. Kazmeier

3.2.1 Limitations of Models, Tools, and Input and Output Formats

The three key elements in an MBT approach are: the model used for the software behavior description, the tools that generate supporting infrastructure for the tests (including expected outputs), and the test-generation algorithm (criteria). In this section, we qualitatively discuss the first two, model and tool, their characteristics and limitations, for each of the selected approaches.

The model must describe the software structure or behavior, and it therefore defines what can be tested using an MBT approach. We can distinguish two types of models in an MBT approach: (1) an initial model, which describes the software specification from the software development process, and (2) an intermediate model, which is used to support test case generation.

The intermediate model captures only the information relevant to the test generation process, excluding information and characteristics unrelated to it. If its derivation is performed automatically, it adds no cost, effort, time, or complexity. Otherwise, it adds cost, effort, time, and complexity to the process, and introduces a source of error into the MBT approach. Ideally, tests are generated without an intermediate model.

The correctness of a behavioral model is essential to applying an MBT approach. If the model is wrong, the generated tests will be invalid, i.e., they check nothing. To address this problem, several model-checking approaches have been defined and integrated with MBT approaches. Examples in the 1st and 2nd Priority Levels are: (GARGANTINI and HEITMEYER, 1999), (ABDURAZIK and OFFUTT, 2000), (BRIAND and LABICHE, 2002), (TAN et al., 2004), and (SATPATHY et al., 2005). Table 13 identifies which of the selected papers use a model checker during test case generation.

4 This paper is one of the papers included subjectively by the researchers in the qualitative analysis subset.
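The counterexample trick used by such approaches can be sketched on a toy transition system. Everything below is invented for illustration; real approaches drive an external model checker such as SMV or SPIN rather than this hand-rolled search:

```python
from collections import deque

# To cover a target state, assert it is unreachable (a "trap property");
# the checker's counterexample path then serves as the test sequence.
def counterexample(transitions, start, target):
    """BFS for a path refuting 'target is never reached'."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path  # counterexample doubles as an executable test
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # property holds: the target is genuinely unreachable

# Invented three-state system
system = {"idle": ["armed"], "armed": ["alarm", "idle"], "alarm": []}
```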

The second key element is the type of tool used to support the test case generation process. Ideally, a single tool should support all steps, from software behavior modeling (or importing inputs from an external tool) to the generation of expected results, automating all the main steps of the approach. The quality of an MBT tool depends on the level of automation (LA) of the test generation process, and on the input (IN) used and output (OUT) generated by the tool. The format/notation used as input or output determines the feasibility of extending the tool. For example, tools using well-known notations, such as XMI (the standard for describing UML specifications), the B language, or TSL, are easier to extend than tools using obscure formats/notations.

Sometimes an approach may require the use of external tools (ET) for steps such as the generation of a software behavior model or of test scripts to be executed on a specific platform. This restricts the use of the approach. Some external tools are proprietary and require a license fee.

Table 13 shows the quality of the tool used by each MBT approach. The first rating indicates that the analyzed tool characteristics are good for an MBT approach, because the tool uses a defined format and generates test cases automatically; the second rating indicates that the tool characteristics neither ease nor hinder the test generation process, for example because some steps are not automated or the input or output format is not defined; and the third rating indicates that the tool does not have good characteristics for MBT, for example when no steps are automated or there is no tool to support the approach.

Table 13. Qualitative analysis about Models, Tools, Input and Outputs

Approach | Model Analysis (Limitation of Behavior Model; Using Model Checker) | MBT Tool Analysis (Attributes; Status)

Abdurazik2000

UML Collaboration Diagrams. This model describes just the relations among classes in a specific sequence in object-oriented software, but there is no information about timing and parallelism between events. It describes the control-flow, and the test data are defined by the instances of classes (objects). Information about non-functional requirements (like timing and parallelism between events) cannot be described.

Using

There is no tool defined to support this approach

Briand2002

6 UML Diagrams. The use of 6 models allows a more precise modeling, but the effort to model

and be consistent is hard. This approach uses a strategy to check the

models before the test generation.

Using

• LA (Level of Automation): all-steps

• IN (Input): generated by itself

• OUT (Output): use a format defined by OMG

• ET (External Software): not used

Chang1996

ADL Specification. This language defines just assertions (a Boolean expression that must

evaluate to true at the termination of function execution) about software (from calls and

requests) and doesn’t allow to model complex behavior. The test data must be defined

externally using a specific description format. This model is applied just for unit testing in specific software categories.

Not using

• LA: all-steps • IN: generated by itself • OUT: test drivers in C

format • ET: not used

Dalal1999

AETGSpec. This model is useful to describe just the data-flow (no control-flow) and it is not

expressive to describe complex constructs. According to the authors, training is

necessary for testers to be able to use it.

Not using

• LA: all-steps • IN: generated by itself • OUT: test cases in a

table format • ET: not used

Gargantini1999

SCR Specification is based on a user-friendly tabular notation and offers several automated

techniques for detecting errors in software requirements specifications, including an automated consistency checker to detect

missing cases and other application-independent errors. An SCR requirements

specification describes both control-flow and data-flow, and test cases are generated as counterexample sequences. Non-deterministic transitions may be modeled. However, non-

functional requirements cannot be described.

Using

• LA: all-steps • IN: imported from a

model checker (SMV or SPIN)

• OUT: test sequences in text file format

• ET: Model checker (SMV or SPIN)

Hartmann2000

UML State Diagram. This model is useful to describe the software internal behavior and the

set of states that one class can assume (control-flow). The tests are applied to check whether the software behaves as described in the requirements for each event or transaction. To describe the data-flow, categories of states are defined using

category-partition method, from other diagram.

Not using

• LA: all-steps • IN: imported from XMI

format • OUT: test cases in TSL

format • ET: Rational Rose

(proprietary tool)

Kim1999

UML State Diagram. This model is useful to describe the software internal behavior and the

set of states that one class can assume (control-flow), as well as events and transactions (data-flow).

This approach is applied to test the static behavior of the software (classes, but without

generalization and inheritance among classes), since dynamic behavior is not supported.

Not using There is no tool defined

to support this approach

Mingsong2006

UML 2.0 Activity Diagram. This model is useful to describe the limited control-flow for testing

(no timing aspects, for example), but the data-flow is not described. In this approach, the test

data are generated randomly to support test cases generation.

Not using

• LA: all steps • IN: generated by itself

using eclipse plug-in • OUT: test case in

Java code format


• ET: not used

Offutt1999b

UML State Diagram. This model is useful to describe the software internal behavior and the

set of states that one class can assume (control-flow). The tests are applied to check whether the software behaves as described in the requirements for each event or transaction. To describe test

data, instances of objects are used.

Not using

• LA: all-steps • IN: imported from XMI

format • OUT: test cases in

ASCII text file format • ET: Rational Rose

(proprietary tool)

Offutt2003

UML State Diagram. This model is useful to describe the software internal behavior and the

set of states that one class can assume (control-flow). Test data are defined as pre-conditions or post-conditions for each state.

Not using

• LA: all-steps • IN: generated by itself • OUT: Test data in

LUSTRE format • ET: not used

Parissis1996

Environment Specification and the safety properties in LUSTRE language.

A LUSTRE program is structured into nodes: a node is a subprogram describing a relation

between its input and output variables. Once declared, a node may be used in any

expression, as a basic operator. LUSTRE allows the modeling of safety properties to evaluate reactive software. Just the control-flow is used to generate tests. Its purpose is to generate test data.

Using

• LA: test script isn’t generated automatically

• IN: models translated manually from external tools models

• OUT: test scripts • ET: SCR tool and

Rational Rose (proprietary tools)

Richardson1992

System Specification In Z format. It is based on set theory and predicate calculus, uses fairly standard mathematical notation and supports

standard set and predicate operations. Systems are described in Z using a state-based

model (state, event, operation). Training is necessary to use this notation to describe

software behavior using Z.

Not using There is no tool defined

to support this approach

Stobie2005a

FSM and ASML. This model is useful to describe the control-flow for testing and for

model checking, but the data-flow is not described. Moreover the authors say that the cost to develop these models is a little high,

and information about non-functional requirement (like timing, parallelism between

events) cannot be described.

Using

There are two tools:

• LA: all steps • IN: generated by

AsmL tool • OUT: test case in

XML format • ET: not used

Stocks1996

Test Template In Z format. It is based on set theory and predicate calculus, uses fairly

standard mathematical notation and supports standard set and predicate operations.

Systems are described in Z using a state-based model (state, event, operation). Training is necessary to use this notation to describe

software behavior using Z. The software behavior described using this

model is limited and it is used just to generate test data (no control-flow).

Not using There is no tool defined

to support this approach

Vieira2006

UML Use Case, Activity Diagram, and Class Diagrams. Activity Diagrams are used to define

the control-flow for each use case, and notes are defined to link activity and class diagrams (where test data are defined using category-partition). However, information about non-

functional requirement (like timing, parallelism between events) cannot be described.

Not using

• LA: all steps • IN: generated by itself

using eclipse plug-in • OUT: test case in TSL

format • ET: not used


An important question regarding behavior models is the limitation imposed by the

language used to describe it. Some languages don’t allow the description of control-flow

and data-flow in the same model (or group of models). The language may limit the

definition of the testing coverage.

Another question is related to the language's power to describe the software behavior. Some approaches don't allow complex models to be described and used during test case generation, decreasing the testing quality. UML diagrams are easy to use because most practitioners are familiar with them, but other notations, such as Finite State Machines (FSM) and the B language, are powerful and may also be used for software modeling.

3.2.2 Complexity, effort, and cost to apply MBT approaches

MBT approaches are usually developed by a research group to be applied in a specific context. After that, an approach may be used in more general contexts or in the field. Applying a new approach in the field is a hard task, because it involves training and adaptation, and consequently time, resources, and cost.

An MBT approach is very useful for systems that change continually (or are in constant evolution), since regenerating test cases makes the testers' task easier.

The ideal way to evaluate an MBT approach is through an experimental study, but this may not be feasible. Alternatively, the complexity of MBT approaches is analyzed subjectively, based on the skill, effort, and cost necessary to use each one in a real project.

According to Dalal (1999), many approaches expect the tester to be 1/3 developer, 1/3 system engineer, and 1/3 tester. Unfortunately, such a combination of skills is rare and the budget to hire for it may not exist; technology that does not adequately take into account the competence of the majority of its users is not useful.

Table 14 describes the usage complexity of each MBT approach. The first rating indicates that the evaluated characteristic (skill, effort, or cost) has a low complexity level and makes the MBT approach easy to use. The second rating indicates that the evaluated characteristic has a high complexity level, making the approach difficult or unfeasible to use. Finally, the third rating indicates that the evaluated characteristic has an intermediate complexity level: additional time must be spent to work around the problems, but using the approach is still feasible.

Table 14. Qualitative analysis about the Complexity to use MBT approaches

Approach | Skill Necessary | Effort to Apply | Costs

Abdurazik2000

• UML Modeling

• Software modeling using UML

collaboration diagram • All steps have a strategy for

automation

• There is no tool to

support, then, all steps must be performed manually

Briand2002

• UML and OCL Modeling

• Software modeling using some

UML Diagrams • All steps have a strategy for

automation

• There is a tool to

automate all steps • Models used during

the development process may be used during tests

Chang1996

• ADL Modeling

• Software modeling using ADL

Specifications, Boolean conditions and test data descriptions

• This approach supports the test execution for C Programs

• There is a tool to

automate all steps

Dalal1999

• AETGSpec Modeling • Equivalence Class Method

• Software (Test data) modeling

using AETGSpec

• There is a tool to

automate all steps

Gargantini1999

• SCR Specifications Modeling

• Software modeling using SCR

Specification • The approach assumes that the

customers have previously formulated a set of system properties

• A translation from SCR Model into SMV or SPIN model using a model checker

• The incompleteness

of the generated test sequences

• Currently, the approach constructs only ONE test sequence for each branch.

• There is a tool to automate all steps

Hartmann2000

• UML Modeling • Category-partition method

• Software modeling using some

UML Diagrams • All steps have a strategy for

automation • Test case generation produces

test cases in TSL format, but this approach also allows the test execution for components based on COM/DCOM and CORBA middleware.

• Rational Rose (proprietary tool) is used for software modeling

Kim1999

• UML Modeling

• Software modeling using UML state diagram • All steps have a strategy for automation

• There is no tool to support; all steps must be performed manually

Mingsong2006

• UML Modeling • Java Programming

• Software modeling using UML

activity diagram • Almost all steps are non-automated • Code instrumentation is

necessary to apply this MBT approach

• There is a tool to

automate all steps • This approach must be applied after the implementation, so the models cannot be used during development

Offutt1999b

• UML Modeling

• Software modeling using UML

state diagram • All steps have a strategy for

automation

• Rational Rose

(proprietary tool) is used for software modeling

• The results obtained in an empirical study show the

effectiveness of this approach for test coverage

Offutt2003

• UML Modeling

• Software modeling using UML

state diagram • Almost all steps have a strategy

for automation. The test scripts generation is not automated.

• The results obtained

in a case study show the effectiveness of this approach for test case generation, faults found, and test coverage

Parissis1996

• LUSTRE Modeling

• According to the authors,

software modeling using LUSTRE is a hard task

• All steps has a strategy for automation

• There is a tool to

automate all steps

Richardson1992

• Reactive System Modeling

(including temporal requirements)

• Z Language

• Two steps in the test generation

process must be performed manually

• There is no information about how to perform the non-automated steps

• There is no tool to

support, then, all steps must be performed manually

Stobie2005a

• Finite State Machine (FSM)

Modeling • Abstract State Machine

Language (ASML) Modeling

• Modeling the software behavior

using ASML • All steps have a strategy for

automation

• There is a tool to

automate all steps • Models used during

the development process may be used during tests

Stocks1996

• Z Language Modeling

• Input and output space modeling using Z notation • The steps for test case generation are not described

• There is no tool to support; all steps must be performed manually

Vieira2006

• UML Modeling • Category-partition method

• Software modeling using a

specific UML Modeling Tool and test generator

• All steps have a strategy for automation

• There is a tool to

automate all steps • Models used during

the development process may be used during tests

All approaches require a basic understanding of a modeling language. MBT approaches are easier for software engineering professionals who received UML training at university. The other languages cited in Table 14 may require some basic training.

The effort to use each approach depends on the complexity of its manual steps (initial modeling or generation). Most approaches require modeling (low complexity). However, some don't describe how to perform the non-automated steps, or describe those steps as hard.

The cost to apply each approach determines its impact (on the schedule or budget) during the software development process. For example, a tool decreases the cost of applying an approach, but the use of a proprietary tool could make the approach less practical. Moreover, sometimes the benefits of using an approach are low because the generated output is imprecise or incomplete, and consequently the cost is high.

3.2.3 Changing the paradigm of MBT and the testing criteria

Testing coverage criteria are divided into two types, both essential for generating test cases: control-flow and data-flow. We can observe that only some approaches use both types to generate test cases, while others use just one. Table 15 presents the analysis of the type of testing coverage criterion used by each MBT approach. The types can be: data-flow, control-flow, or both (control-flow and data-flow).

Both data-flow and control-flow are important during the test generation process. However, one characteristic extracted from these approaches is that earlier (a behavior observable in the approaches published approximately until 1999) the focus was on data-flow-based testing coverage: the approach either used data-flow alone, like (CHANG et al., 1996), (PARISSIS and OUABDESSELAM, 1996), and (DALAL et al., 1999), or used both data-flow and control-flow. The reason is that the languages/techniques then used for software modeling were limited and focused on the black-box technique, which considers software as a closed function into which data are inputted and from which specific results are expected. Moreover, the systems under test were simple functions or small modules.

Table 15. Qualitative analysis about the type of testing coverage criteria

Approach | Publication Year | Testing Coverage Type

Richardson1992 1992 Both: data-flow (equivalence class) and control-flow (paths)

Chang1996 1996 Data-flow: from test data description

Parissis1996 1996 Data-flow: it is used a random test data generator

Stocks1996 1996 Both: data-flow and control-flow

Dalal1999 1999 Data-flow: equivalence class

Gargantini1999 1999 Both: data-flow (data boundary) and control-flow (branch coverage)

Kim1999 1999 Both: data-flow and control-flow

Offutt1999b 1999 Control-flow (this approach is used for TEST DATA generation)

Abdurazik2000 2000 Both: data-flow and control-flow

Hartmann2000 2000 Both: data-flow and control-flow

Briand2002 2002 Control-flow: path extracted from activity diagrams

Offutt2003 2003 Both: data-flow and control-flow

Stobie2005a 2005 Control-flow: path extracted from activity diagrams

Mingsong2006 2006 Control-flow: path extracted from activity diagrams

Vieira2006 2006 Both: data-flow (category-partition) and control-flow (activity diagram)

More recently, the focus has changed, and most approaches frequently work with the structure of the system (control-flow-based testing coverage) for test case generation, like (BRIAND and LABICHE, 2002), (MINGSONG et al., 2006), and (STOBIE, 2005). This happened because recently developed languages/techniques are able to describe structural information about the software behavior, easing the analysis of the software control-flow. Moreover, system complexity increases each year, and it is necessary to evaluate not just small modules but full systems. If we analyze all papers included in the 1st and 2nd Priority Levels, this trend is easier to visualize than by looking at just the 15 selected papers.

This fact created a problem, because currently we need to use both criteria (data and control) to generate test cases. Therefore, the main questions to be answered by the approaches are: How can both be used? Which model should represent each one? Can a single model describe everything (since the more models there are to describe, the higher the cost and effort of applying the approach)?
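One common way of combining the two criteria is to pair each control-flow path with each data partition. The sketch below illustrates this; all path names, categories, and values are invented:

```python
from itertools import product

# Invented control-flow paths (e.g., from an activity diagram) and
# data partitions (e.g., from category-partition).
paths = [["start", "validate", "save"], ["start", "validate", "reject"]]
partitions = {"amount": ["zero", "typical", "overflow"]}

def combined_cases(paths, partitions):
    """One abstract test case per (path, data-choice) combination."""
    choices = [dict(zip(partitions, c)) for c in product(*partitions.values())]
    return [(p, c) for p in paths for c in choices]

cases = combined_cases(paths, partitions)
```

The Cartesian product grows quickly, which is precisely the cost-and-effort concern the questions above raise; practical approaches prune it with constraints or pairwise selection.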

3.2.4 Positive and Negative aspects from MBT approaches

Each MBT approach has specific characteristics that distinguish it from the others, and these cannot be used to decide whether one approach is better than another. In different contexts, each characteristic may or may not be of interest. For example, if one approach uses model X and another uses model Y for test case generation, it is impossible to state that one is better than the other. These specific characteristics were described and analyzed throughout this section.

In this last section, we present some positive and negative aspects of each MBT approach. This information cannot be used to compare them, but it may help in selecting or classifying an approach to be used in a project with specific characteristics.

Table 16 describes the positive and negative aspects of each MBT approach selected for qualitative analysis.

Table 16. Positive and Negative aspects of MBT approaches

Approaches Positive Aspects Negative Aspects

Abdurazik2000

- All process may be executed automatically;

- This approach introduces an algorithm for code instrumentation of a program that is implemented from collaboration diagrams;

- This approach includes techniques for both static and dynamic use;

- There is no tool built to support this approach; - The authors describe the intention to integrate an eventually built support tool with Rational Rose, a proprietary tool;

- This approach doesn't generate test cases, but only test data;

- There is no information about a case study using this approach; - Collaboration diagrams don't define timing information or parallelism between events;

Briand2002

Positive aspects:
- The whole process is executed automatically using TOTEM;
- The authors discuss the use of non-functional requirements to generate test cases as future work;
- The models used are extracted from artifacts produced during the software development process (analysis phase);
- The research is in progress, and new MBT approaches may be developed from it;

Negative aspects:
- The approach has not been evaluated in the field;

Chang1996

Positive aspects:
- The whole process may be executed automatically;
- There is a strategy for selecting (filtering) the tests;

Negative aspects:
- Applied only to C programs;
- There is no information about how to define the testing coverage criteria;
- There is no information about how to define the test data;
- The authors describe the technique as a complement to code-based techniques, but the idea of MBT is to start testing before the code exists, using a model describing the software behavior;

Dalal1999

Positive aspects:
- The tight coupling of the tests to the requirements;
- The ease with which testers can write the data model;
- The ability to regenerate tests rapidly in response to changes;
- Many hints about MBT in general;
- Much information about future research regarding MBT;

Negative aspects:
- It works only with test data; the control flow is not considered;
- There is no oracle to store the test cases;
- The approach demands development skills from testers;
- The approach is applied to systems of low complexity (the test's response to a stimulus is relatively simple);

Gargantini1999

Positive aspects:
- The approach uses model checking to generate test sequences from counterexamples, an interesting solution;
- The approach uses control and data coverage to generate test sequences;
- The process is automated using the developed tool;

Negative aspects:
- The paper states that the approach generates test suites covering all errors, which is impossible to guarantee;
- The authors assume that the customers have formulated a set of system properties (not a usual step during requirements specification);
- The incompleteness of the test sequences;
- The method assumes the correctness of both the operational specification and the properties;

Hartmann2000

Positive aspects:
- The process is automated using the developed tool;
- The approach defines test cases from data flow and control flow using different criteria;
- The approach generates test scripts in TSL format (executable in several test execution tools);
- The approach is able to execute tests for COM- and CORBA-based components;

Negative aspects:
- The approach does not handle asynchronous communication, because that requires modeling the queued messages and events;
- Internal data conditions of the state machines (global state machine variables) that influence transition behavior are not supported;
- Concurrent states are not supported yet;
- The approach uses Rational Rose (a proprietary tool);

Kim1999

Positive aspects:
- The approach uses both control flow and data flow to generate test cases, unlike approaches that use only one of them (generating either test data or paths);
- All steps may be executed automatically, and the steps are described in the paper;

Negative aspects:
- Applied only to testing the software's static behavior;
- There is no information about the types of inputs and outputs used;
- There is no information about the efficiency of the approach in a real project;

Mingsong2006

Positive aspects:
- The process is automated using the developed tool;
- The approach is able to generate test cases and execute the tests for Java programs, supporting the whole testing process, including test analysis;

Negative aspects:
- The approach is applied only to Java programs;
- There is no case study describing the use of the approach in a real project;
- The test data are generated randomly (without a test data generation criterion); only the control flow uses testing criteria;
- Code instrumentation of the Java programs is necessary to apply the approach;

Offutt1999b

Positive aspects:
- The process is fully automated using the developed tool;
- This is the first UML-based MBT approach identified in the technical literature;
- Empirical results from using UMLTest to evaluate the testing criteria are given; they indicate that highly effective tests can be automatically generated for system-level testing;

Negative aspects:
- The approach uses Rational Rose (a proprietary tool);
- The current tool requires all variables to be Boolean; other types are not supported;
- Test data are defined by the parameters of events, without using a data-flow strategy;

Offutt2003

Positive aspects:
- There is a tool to support the approach;
- The approach has four levels of testing criteria to be applied;
- Empirical results show the effectiveness of the approach in terms of number of test cases, faults found, faults missed, and coverage for each testing criterion;

Negative aspects:
- There are two hard non-automated steps to be performed when using the approach;
- Test data are defined by the parameters of events, without using a data-flow strategy;
- The SpecTest tool parses specifications into a general format (the specification graph) using an algebraic predicate representation; currently it processes only one mode class at a time for SCR specifications, and one statechart class for UML specifications;

Parissis1996

Positive aspects:
- The approach works with non-functional requirements (security properties) to generate test data;
- It allows fully automated test data generation consistent with the environment assumptions;
- Test data generation ignores test data of no interest for the testing process;
- It provides a formal framework for testing safety-critical software;

Negative aspects:
- It works only with test data, not with control flow;
- Specifying the software environment by means of invariant properties is a rather difficult task; it is necessary to learn LUSTRE before using the approach;

Richardson1992

Positive aspects:
- The use of a test oracle may allow exhaustive testing, because the subset of outputs is defined;
- Regression testing may be accomplished because the test oracle is defined;
- Temporal requirements are used to generate test oracles, since this information is essential for reactive systems;

Negative aspects:
- There is no information about the inputs and outputs used by the approach;
- There is no information about supporting tools;
- The approach can hardly be automated, because it uses a non-executable specification language;

Stobie2005a

Positive aspects:
- There are two tools to support test case generation;
- There are six options for testing coverage, and the tester may choose which one to use in a project;
- The tester may use assertions (constraints) for model checking;
- The outputs are generated in XML format;
- The quality of the approach is demonstrated by increased coverage and number of test cases, with reduced time and effort in comparison with manual test design;

Negative aspects:
- The cost of developing a model is high (according to the authors);
- The learning curve for the approach is somewhat steep;
- The tools used are perceived as too expensive or too complex by most testers;
- The approach uses only Microsoft technologies;
- The approach works only with control flow (no test data);

Stocks1996

Positive aspects:
- The paper describes the importance of combining control and data criteria to generate test cases; most approaches use only one of them;
- The test templates are generated as a tree, from a valid input to other inputs by changing the initial state;

Negative aspects:
- There is no information about the steps to be accomplished to generate test cases;
- There is no information about a supporting tool;
- There is no information about the model used as input or about the outputs;

Vieira2006

Positive aspects:
- The process is automated using the developed tool;
- Almost all models used are extracted from artifacts produced during the software development process (analysis phase);
- The approach defines test cases from data flow and control flow using different criteria;
- The approach generates test scripts in TSL format (executable in several test execution tools);
- The paper describes positive results (reduction of effort, test cases, cost, and time) from the use of the approach in a project;

Negative aspects:
- The approach is applied only to GUI testing;
- Modeling the categories and values using class diagrams is necessary to apply the category-partition method;


4 Towards new research regarding MBT

After the evaluation of the MBT approaches, some insights emerged that could become future research topics in MBT. These observations can be divided into:

1. Important aspects to be considered during the development or application of a

new MBT approach. These aspects are discussed in section 4.1.

2. Potential future research topics in MBT. Three topics for new research are

introduced, motivated, discussed, and some examples are presented.

Subsequently, for each topic, we present preliminary guidelines on how to develop the research and what results to expect. This information is shown in section 4.2.

4.1 Observations for application or development of a MBT approach

The observations extracted from the 74 papers describe important aspects to be considered during the application or development of an MBT approach. They are

enclosed in boxes in this section. The first box discusses observations to be considered

during the development of a new MBT approach, and the second box discusses

observations related to the selection of an MBT approach for a project.

4.1.1 The model to describe the software behavior is a key point

The behavior model must be developed carefully. Its limitations and restrictions

must be respected during the modeling of the software under test. Moreover, the

correctness of a model is fundamental to start the test case generation process. If the

model is wrong, the generated tests will be invalid.

Some approaches present a model-checker integrated with the test case

generation process. This is an interesting solution, but usually the checker evaluates only the syntax of the model, not its semantics. Errors from modeling may still exist.
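As an illustration, consider a minimal sketch of the kind of syntactic check such an integrated checker typically performs. The state names and transition table below are hypothetical, not taken from any surveyed approach:

```python
# Hypothetical behavior model: a vending machine described as a
# state-transition table mapping state -> {event: next_state}.
MODEL = {
    "idle":       {"insert_coin": "paid"},
    "paid":       {"select_item": "dispensing", "refund": "idle"},
    "dispensing": {"item_taken": "idle"},
}

def check_syntax(model):
    """Return a list of syntactic problems: transitions whose target
    state is never defined. A check like this says nothing about
    whether the modeled behavior is semantically correct."""
    errors = []
    for state, transitions in model.items():
        for event, target in transitions.items():
            if target not in model:
                errors.append(f"{state} --{event}--> {target}: undefined target")
    return errors

print(check_syntax(MODEL))  # [] -> syntactically consistent
# A semantic mistake (e.g. 'refund' wrongly leading to 'dispensing')
# would pass this check unnoticed, and the generated tests would
# faithfully encode the mistake.
```

This is exactly the gap described above: the check validates the model's form, while a wrong transition that is well-formed still produces invalid tests.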

Another aspect is the language/notation used for modeling. UML is more common

than other languages. The expressive differences among the languages determine the effectiveness of the test case generation process. Important advantages of UML are:


• UML is an international standard for object-oriented software modeling defined by

OMG (Object Management Group);

• UML has been used more than other languages;

• Software engineering professionals receive basic UML training at university, which makes it easy to apply UML-based MBT approaches;

• Many tools are available to support software modeling using UML.

Regarding intermediate models, we observe that translating one model into another may introduce a new source of errors, so the strategy supporting the translation must be well defined. If this task is done automatically, the possibility of mistakes is lower than if it is done by hand.

While choosing the behavior model to be used in an MBT approach, we have to analyze its limitations and what it can or cannot represent. This information may limit the use of the MBT approach.

Check whether the behavior model allows modeling our system and whether there are intermediate models to be developed. Assure the correctness of the model, using a checker or another solution, before performing the test case generation.

4.1.2 Testing coverage must address control-flow and data-flow criteria

Testing coverage criteria define the set of test cases selected for testing. Test

cases can be generated from two sources: control-flow (structure or path extracted from

the software) or data-flow (test data to be input during the test execution) criteria. When testing large systems, it is essential to combine both criteria to increase the testing coverage. Usually, a test execution strategy is composed of test procedures

(describing the sequence to be followed from the control-flow) and test cases (describing

the test data to be input from the data-flow).

If only one of these two criteria is used, the MBT approach requires either the test data or the test procedures to be defined manually.
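The combination of the two criteria can be sketched as follows: enumerate event sequences from a behavior model (the control-flow side) and pair each sequence with representative input values (the data-flow side). The login-dialog model and input partitions below are hypothetical, chosen only to illustrate the combination:

```python
from itertools import product

# Hypothetical control flow of a login dialog: state -> [(event, next_state)].
TRANSITIONS = {
    "start":      [("submit", "validating")],
    "validating": [("ok", "logged_in"), ("fail", "start")],
}

def paths(model, state, end_states, prefix=()):
    """Enumerate event sequences (test procedures) from `state` to any
    end state: the control-flow criterion. Each event is used at most
    once per path to keep the enumeration finite."""
    if state in end_states:
        yield prefix
        return
    for event, nxt in model.get(state, []):
        if event not in prefix:
            yield from paths(model, nxt, end_states, prefix + (event,))

# Data-flow criterion: representative input classes for each field.
USERNAMES = ["valid_user", ""]        # valid / empty partitions
PASSWORDS = ["correct", "wrong"]

procedures = list(paths(TRANSITIONS, "start", {"logged_in"}))
# Each test case = one procedure (control flow) + one data combination.
test_cases = [(proc, user, pwd)
              for proc, user, pwd in product(procedures, USERNAMES, PASSWORDS)]
print(len(test_cases))  # 4
```

With only one criterion, either the `procedures` or the input partitions in this sketch would have to be written by hand, which is the manual effort described above.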


While defining the testing coverage criteria, we should try to combine control flow and data flow in test case generation to increase the coverage. Otherwise, the approach has limitations.

Check whether the set of test cases generated by the criteria supplied by the approach is insufficient, excessive, or enough to test our system.

4.1.3 The level of automation must be high

The level of automation is an important factor in determining the feasibility of using an MBT approach. Usually, an MBT approach has only one non-automated step: the modeling of the software behavior used as input to the test case generation process. Additional non-automated steps increase the effort, cost, and complexity of using the approach.

Non-automated steps have different complexity levels, depending on the language/notation used for modeling and on the algorithms or choices required to perform each step. If an MBT approach has a high level of automation, the complexity, effort, and cost of applying it are low.

Try to develop an MBT approach with a high level of automation. We need to create (or use) algorithms to support each step, and to describe guidelines for performing them. If there are non-automated steps, we should try to reduce their complexity level.

Check if the automation level is high, and if we are prepared to perform the non-

automated steps that compose the test case generation process. Sometimes the

complexity of these steps is high and requires special resources or skills.

4.1.4 Tool support for automating the steps is fundamental

Tools must support the execution of the automatic steps. If there is no tool available to support a step, the task may be hard or even unfeasible (for example, applying a test case generation algorithm by hand is not trivial). A tool must also supply the integration between non-automated (usually modeling) and automated (usually generation) steps.

The importance of the integration between automated and manual steps can be observed in the following example: in one approach, the test case generation is performed automatically by an available tool. However, the model used as input to the testing process is described in a text or XML file developed manually, because there is no tool to support the modeling. The cost and effort of this task can hamper the use of an MBT approach.
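A small sketch of this situation: the generation side is trivially automatable once a model exists, but the hand-written input file needs validation glue that a supporting tool would normally provide. The XML format, tag names, and attributes below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical hand-written model file, of the kind testers maintain
# manually when no modeling tool exists. The format is invented.
MODEL_XML = """
<model initial="idle">
  <transition from="idle" event="start" to="running"/>
  <transition from="running" event="stop" to="idle"/>
</model>
"""

def load_model(xml_text):
    """Parse the hand-written model into a transition list, failing
    loudly on missing attributes. This is the integration glue a
    supporting tool would otherwise supply."""
    root = ET.fromstring(xml_text)
    transitions = []
    for t in root.findall("transition"):
        for attr in ("from", "event", "to"):
            if t.get(attr) is None:
                raise ValueError(f"transition missing '{attr}' attribute")
        transitions.append((t.get("from"), t.get("event"), t.get("to")))
    return root.get("initial"), transitions

initial, transitions = load_model(MODEL_XML)
# initial == "idle"; two transitions loaded and ready for generation
```

Without such validation, a typo in the hand-written file silently propagates into the generated test cases, which is precisely the cost and risk described above.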

Proprietary tools are often stable and feature-rich, but may require a license fee that limits their use in some organizations.

Construct a tool to make the application of the MBT approach easier. An important aspect is to provide support for the integration between manual and automatic steps. Moreover, analyze the necessity of using proprietary tools to perform any step in the testing process; such tools usually have a short lifetime and may offer poorer usability.

Check whether the MBT approach supplies tools to support the testing process, preferably all of its steps. Verify that there is no additional cost to use these tools, e.g. a license fee. If the tools do not support all steps, analyze the effort, cost, and complexity of performing the remaining tasks manually.

4.1.5 The skills and complexity required to use an approach must be understood by all stakeholders

The skills required for testing change year by year. Previously, testers needed to learn only testing techniques to perform testing activities. Currently, testers need to know testing techniques, modeling languages, and sometimes programming languages. Each MBT approach requires from the tester expertise in the notation used for software modeling, and sometimes in testing criteria, testing metrics, or languages used to generate test scripts.


The first step in applying an MBT approach in the field must be an explanation for all stakeholders of the test case generation process, its inputs and outputs, testing criteria, and level of automation. This is fundamental because an MBT approach may involve different roles in the software development process and may impact current activities. For example, the model used as input for test case generation may be produced by the designer, and there may be constraints on the model required to apply the MBT approach. The designer then needs to know how to do this task and how important it is for the test case generation process.

Construct basic training material during the development of the approach, to support the training of stakeholders and the approach's deployment in the field. Including MBT as part of the software development process may help decrease the effort, cost, and time of a project, but requires the commitment of the whole project team, not just the testing team.

Analyze the skills required to apply an MBT approach, and whether there are qualified professionals to work with the approach in our organization. Otherwise, analyze the cost of training, whether it is necessary to involve other stakeholders (analysts, designers, programmers), and whether these stakeholders can and are willing to perform the new tasks. Significant changes to current tasks may make the deployment of an MBT approach difficult; the ideal solution is to introduce few changes to current tasks.

4.1.6 The order of steps to be followed while using an MBT approach

Each approach defines a set of steps for the test case generation process. It is necessary to know them, their prerequisites, post-conditions, and constraints. The steps must be performed in the same sequence as originally defined by the authors; otherwise, wrong outputs may be generated.
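The step-ordering concern can be sketched as a small driver that refuses to run a step whose prerequisite artifact is missing. The step names and artifacts below are hypothetical, standing in for whatever a concrete approach defines:

```python
# Each step: (name, prerequisite check over produced artifacts, artifact it produces).
STEPS = [
    ("build_model",    lambda s: True,          "model"),
    ("generate_tests", lambda s: "model" in s,  "tests"),
    ("execute_tests",  lambda s: "tests" in s,  "results"),
]

def run_process(steps):
    """Run the steps in their defined order, checking each step's
    prerequisite before running it and recording its output artifact
    (its post-condition)."""
    state = set()
    for name, precondition, produces in steps:
        if not precondition(state):
            raise RuntimeError(f"prerequisite of '{name}' not satisfied")
        state.add(produces)   # stand-in for actually performing the step
    return state

print(sorted(run_process(STEPS)))  # ['model', 'results', 'tests']
# Running the same steps out of order fails fast instead of
# silently producing wrong outputs.
```

Making prerequisites explicit like this is one way an approach can enforce its own sequence rather than relying on the tester to remember it.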

Create easy steps, with well-defined inputs and outputs. Moreover, try to automate them

by algorithms or tools.


Analyze the steps that compose the approach. Check whether our software project produces the inputs required to start the approach, and observe the constraints of each intermediate step. Usually, MBT approaches have a strict process to be followed, which may require changes to the current software testing process of an organization.

4.1.7 Technology transfer

The transfer of MBT technologies from the academic environment to industrial projects needs to be analyzed carefully before being performed. Several issues must be considered, as presented in this section, and risks and impacts must be anticipated. Researchers must support the transfer with experimental and controlled studies to adapt the approach to the characteristics of industrial environments.

Execute experimental studies at different levels to evaluate the performance of an MBT approach in different scenarios, such as in virtuo, in vitro, and in vivo experiments (TRAVASSOS and BARROS, 2003). These studies help in the characterization, quality analysis, and observation of the limitations and context of an MBT approach. Moreover, develop training material (such as slides, demos, or tool manuals) to support this task.

Analyze the existence of experimental results evaluating an MBT approach, or of publications based on its application in the field. This information may help reduce the risks of applying a new approach in an organization.

Apply the approach first in small, controlled projects before using it in large and complex ones. This is useful to familiarize stakeholders with the new testing process, and may allow running the current and new testing approaches together for comparison.

Ask for the researchers' support when transferring an approach to real projects. Applying a new MBT approach in a software project without support from its developers is impracticable.


4.2 Future research topics in Model-based Testing

Three open questions from the current research are suggested for future work. In this section, we discuss them, along with their motivation, feasibility, examples, and expected results. The questions are:

1. What are the contexts (testing levels, software domains, etc) in which Model-

based Testing strategies have not been applied? And why?

• What are the limitations of the model-based testing approaches described in the technical literature? How can they be avoided?

2. How are non-functional requirements descriptions used for test case generation?

• How can a model describe software in order to efficiently generate test

cases for non-functional requirements evaluation?

• How to deal with different types of non-functional requirements?

3. What defect types are not detected by test cases generated from a specific model?

• How to establish the relation between defects and their cause (phase,

model, or artifact)?

• Which types of defects, not detected by MBT, continue to occur after the software deployment (e.g. application domain errors, inconsistencies, omissions)?

• Which types of defects are more efficiently detected by MBT?

4.2.1 In what contexts (testing levels, software categories) have Model-based Testing strategies not been applied? Why?

Traditional testing techniques (not based on models) are published in the technical literature for several software domains and testing levels. However, MBT approaches have not been frequently applied in some contexts (Figure 10).

This research must investigate the contexts in which MBT approaches have not been applied and analyze the reasons, characteristics, and limitations of these contexts that prevent the use of MBT approaches.


Figure 10. MBT approaches have not been applied in specific contexts.

After this analysis, we may be able to create solutions for testing these software domains using an MBT strategy at different levels.

From these questions, we have defined an abstract of this research.

� ABSTRACT:

This research analyzes the contexts in which MBT has not been frequently applied.

Subsequently, we may develop a strategy to define (or adapt) one approach to be

applied in one of these contexts based on its specific characteristics. To do that, we need to know which characteristics are important in an MBT approach, and how to relate them to the specific characteristics of the software domain and the testing level in which the approach to be developed will be applied.

We have extracted from the obtained results some issues that may be used as a starting point for research evaluating MBT in different contexts. The issues are presented below:

� ISSUES:

- How to create a single MBT approach for use at different testing levels during the

entire software development process?

- How to reduce the test case set without decreasing the testing coverage (i.e.

eliminating redundant test cases)?



- What are the main characteristics of the software domains to which MBT approaches have not been frequently applied? These domains include Product Line Software, COTS, Web Services, and Web Applications. How to test applications in these software domains using an MBT approach?

- What models may be used to describe the specific characteristics for the above

domains? What are testing coverage criteria to be applied for applications in each of

the above domains?

- Why is tool usage for MBT-based integration testing low? And how can tool support for MBT-based integration testing be increased?
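One of the issues above, reducing the test case set without decreasing coverage, is essentially a set-cover problem. A minimal greedy sketch follows; the test names and coverage data are hypothetical, and a greedy solution preserves coverage without guaranteeing a minimum-size suite:

```python
# Hypothetical coverage data: which model elements (transitions,
# branches, ...) each generated test case exercises.
COVERAGE = {
    "tc1": {"t1", "t2"},
    "tc2": {"t2", "t3"},
    "tc3": {"t1", "t2", "t3"},   # makes tc1 and tc2 redundant
    "tc4": {"t4"},
}

def minimize(coverage):
    """Greedy set cover: repeatedly keep the test case covering the
    most still-uncovered elements. Coverage is fully preserved, and
    redundant test cases are dropped."""
    uncovered = set().union(*coverage.values())
    kept = []
    while uncovered:
        best = max(coverage, key=lambda tc: len(coverage[tc] & uncovered))
        kept.append(best)
        uncovered -= coverage[best]
    return kept

print(sorted(minimize(COVERAGE)))  # ['tc3', 'tc4']
```

Here `tc3` subsumes `tc1` and `tc2`, so the suite shrinks from four test cases to two while the covered elements stay the same.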

These issues have similar purposes. Therefore, the guidelines towards a solution

for them could be generalized in some steps:

� GUIDELINES:

- Observe the main characteristics of each context where MBT has not been frequently applied yet. For example:

o What are the specific characteristics of Web Application?

o How to model a Web Application?

o How to test a Web Application?

- Propose or identify an interesting method or approach to test an application in this

context. For example:

o Identification of approach(es) to test web application;

- Define how to use an approach based on model to test an application in this context.

For example:

o Creation or adaptation of a Web Application Testing Technique for MBT.

- Execution of experimental studies to evaluate the effectiveness of the solution regarding different characteristics, such as effort, time, coverage, and cost. For example:

o Controlled experiments and field studies;


4.2.2 How are non-functional requirements descriptions used for test case generation?

A Software Requirements Specification document consists of functional (Figure 11A) and non-functional (Figure 11B) requirements (IEEE 830, 1998). Functional requirements describe the software components or modules to be developed during the software design and coding phases (Figure 11C) and are used for test case generation at the system or integration testing level (Figure 11D). Non-functional requirements are essential to define architectural aspects and constraints to be addressed during software development (Figure 11E). Both need to be modeled and tested.

However, the majority of MBT approaches use only functional requirements during the test case generation process; non-functional requirement descriptions have not been frequently used (Figure 11F). This information was analyzed in section 3.1.2.

Figure 11. Use of functional and non-functional requirements in the software development and MBT processes.

We propose to investigate the testing of different types of non-functional requirements by MBT. The main question is: how can test cases be generated from non-functional requirements descriptions?

This research needs to define a classification of non-functional requirements, using ISO-9126 (2001) for example, and propose a strategy for testing different types of non-functional requirements in different contexts (non-functional requirements behave differently in different contexts).

From these questions, we have defined an abstract about this research.

� ABSTRACT:

This research discusses the development of an approach for test case generation for different types of non-functional requirements in different software domains (according to some standard, such as ISO or IEEE; e.g., usability, efficiency, performance, security, etc.).

Some issues regarding the use of non-functional requirements descriptions for MBT have been extracted. These issues are presented below:

� ISSUES:

- How to classify non-functional requirements for a specific software domain?

- How to model quality attributes for applications in a specific software domain?

- Which model to use for modeling non-functional requirement behavior?

- How to measure the testing coverage of non-functional requirements?

- Which types of non-functional requirements can be tested by MBT approaches? (Some attributes may not be possible to model.)

Using ISO-9126 (2001) as a standard to classify non-functional requirements (as described in section 3.1.2), we have defined some examples of non-functional requirements for different software domains, as described below:

� EXAMPLES OF NON-FUNCTIONAL REQUIREMENTS (ISO 9126)

- Usability
o Are the defined styles being applied in a web application?
o What is the defined screen configuration (480x600; 800x600; 1024x768)? Is the correct configuration being used?
- Reliability
o Does the software run 24 hours a day, 7 days a week?
o Does the software have a mechanism for fault tolerance?

- Efficiency
o Does the software respond to an action within the time defined in the requirements?

- Portability
o Can the (web) software be used in MS Internet Explorer and Firefox?
- Security
o Is the software prepared to avoid attacks such as SQL injection?
o A web application must not transport information in the URL (i.e., it must use POST rather than GET). Does the software comply with this constraint?
- Maintainability
o Model-based testing approaches support software maintenance (regression testing), but how can maintainability itself be evaluated?
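Checks like the security and efficiency examples above can often be turned into automated assertions. A minimal sketch in Python (the function names, URLs, and time budget are illustrative assumptions, not part of any surveyed approach):

```python
from urllib.parse import urlparse

def violates_url_constraint(url: str) -> bool:
    """Security check: True if the URL carries data in its query string
    or fragment (the constraint above requires POST instead of GET)."""
    parsed = urlparse(url)
    return bool(parsed.query) or bool(parsed.fragment)

def within_response_budget(elapsed_seconds: float, budget_seconds: float) -> bool:
    """Efficiency check: did the measured response time stay within the
    budget defined in the requirements?"""
    return elapsed_seconds <= budget_seconds

# A login form leaking credentials through GET parameters violates the rule.
assert violates_url_constraint("https://example.com/login?user=bob&pwd=123")
assert not violates_url_constraint("https://example.com/login")

# A 1.2 s response breaks a 1.0 s requirement.
assert not within_response_budget(1.2, 1.0)
```

In an MBT setting, predicates of this kind would be attached as oracles to the test cases generated from the behavior model.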

These issues are integrated with the purpose of enabling the use of non-functional requirements in an MBT solution. Therefore, the guidelines towards this solution can be described in some steps:

• GUIDELINES:

- Define the context in which non-functional requirements descriptions will be used for MBT. For example:
o Which software domain? Web application, embedded system, etc.
o Which testing level? System testing, integration testing, etc.
- How to classify non-functional requirements in the selected software domain? For example:
o How to classify types of non-functional requirements for web applications?
- How to test non-functional requirements for the selected software domain? For example:
o How to test different types of non-functional requirements for web applications?
- Which model to use for non-functional requirements modeling? For example:
o How to model a specific type of non-functional requirement for web applications?

- Propose a method to test types of non-functional requirements for a software domain. For example:
o Define an MBT approach to test different types of non-functional requirements for web applications;
- Execute experimental studies to evaluate the effectiveness of this solution based on different characteristics, such as effort, time, coverage, and cost. For example:
o Controlled experiments and field studies;

SIEMENS has developed a usability approach, using WebChecker, to check whether a web application has been developed according to the SIEMENS style guide.

4.2.3 What are the defect types not detected by the test cases generated from a specific behavior model?

Testing can show the presence of failures in software, but not their absence; defects may remain in the software after testing.

Software development is a human task, and consequently errors may be made during the process (HOWDEN, 1987). Testing tries to reveal failures in a product to support the elimination of the defects that cause them. However, reaching 100% testing coverage is a hard, impracticable task, since we would have to test the universe of all possible inputs (Figure 12).

Figure 12. Behavior of testing coverage criteria applied to try to discover defects in software.

Testing coverage criteria aim to define strategies that assure an acceptable quality level for software without reaching 100% coverage. However, some types of defects may not be discovered by a given testing approach. This problem is more frequent in model-based approaches, since they are dependent on the model



used to describe the software behavior, and sometimes these models have limited

testing coverage.
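The impracticability of exhaustive input coverage is easy to quantify. A small sketch for a hypothetical three-field input form (the field definitions and sizes are illustrative assumptions):

```python
import math

# Hypothetical input form: each field's individual input space.
field_sizes = {
    "username (8 lowercase ASCII letters)": 26 ** 8,
    "age (0-130)": 131,
    "country code (ISO 3166 entries)": 249,
}

# The universe of inputs is the product of the field spaces: even this
# tiny form yields roughly 6.8 * 10**15 combinations, which is why
# coverage criteria sample the input space instead of enumerating it.
universe = math.prod(field_sizes.values())
print(f"{universe:.2e} inputs to test exhaustively")
```

This is the motivation behind coverage criteria: selecting a tractable, representative subset of an input universe that can never be exercised in full.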

We want to discover the types of defects not detected by MBT approaches and to propose improvements to the testing process to avoid these defect types in future projects. This solution may or may not involve MBT. The goal is to increase testing coverage, while also reducing the effort, time, and cost of testing.

To perform this task, we need results from projects in which MBT approaches have been applied. To do this, we should apply a causal analysis and resolution approach (CARD, 2005) to establish the relation between defects and their causes (e.g., the phase, model, or artifact in which the defect was discovered). Subsequently, new strategies may be defined to avoid these types of defects in future projects, using an MBT approach or not.

From these questions, we have defined an abstract for this research topic.

• ABSTRACT:

This research discusses the improvement of an MBT approach, or simply of a testing strategy, using information extracted from software projects. We need to apply a causal analysis and resolution approach to establish the relation between defects and their causes in an organization, and to evaluate the application of an MBT approach in a real project. Examples of information to be analyzed from project results are:

• In which phase these defects were introduced into the software artifacts.
• What types of information (or behavior) the models are not able to represent.
• Which types of defects are not found by MBT approaches.

From this information, we can propose a new strategy to improve the testing process and avoid, in future projects, the types of defects not found by MBT approaches.

Issues based on this research topic have been extracted. These issues are presented below:

• ISSUES:

- What were the types of defects detected using an MBT approach?
- What were the types of defects not detected using an MBT approach, but detected after deployment?
- How to extract, from the software development process, metrics about the MBT approach used?
- What is the distribution of defects per phase or per defect type?
- What are the origins of these defects? (Limitations of the model? Testing criteria? Tool? Testing process? Modeling error?)
- How to avoid these types of defects in future projects, using an MBT approach or another strategy to support testing?

These issues involve a strategy to support the evaluation and improvement of MBT approaches. Another purpose is to support technology transfer from the academic environment to industry. The guidelines towards this solution can be described in some steps:

• GUIDELINES:

- Define metrics about MBT approaches to be extracted from software development projects. For example:
o What is the testing coverage?
o How many defects were detected using the MBT approach?
o How many defects were detected without using the MBT approach?
- Collect information about MBT approaches in software projects to be analyzed. For example:
o Are there empirical results describing metrics on the quality and application of MBT approaches available in the technical literature?
- Develop a causal analysis and resolution approach to identify the origin of defects in different approaches. For example:
o How to determine the origin of a defect from the project data?
o How to propose a resolution for an identified problem, to avoid it in future projects?

- Analyze the types of defects usually not identified by MBT approaches and propose a solution (model-based or not) to avoid them. For example:
o How to identify software domain errors using an MBT approach?
o How to identify inconsistency errors between parts of the software requirements using an MBT approach?
o How to identify logic errors using an MBT approach?
- Execute experimental studies to evaluate the improvement brought by the newly developed strategy. For example:
o Controlled experiments and field studies;
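The first guideline (metrics extraction) can be sketched in a few lines. The record format and the sample data below are illustrative assumptions, not results from any project in this review:

```python
from collections import Counter

# Hypothetical defect records: (phase introduced, defect type, found by MBT?)
defects = [
    ("Requirements", "omission", True),
    ("Design", "logic", True),
    ("Design", "interface", False),
    ("Coding", "logic", True),
    ("Coding", "logic", False),
]

# Distribution per phase and per defect type (two of the issues listed above).
per_phase = Counter(phase for phase, _, _ in defects)
per_type = Counter(dtype for _, dtype, _ in defects)

# Share of defects the MBT approach failed to detect ("escaped" defects):
# these are the candidates for causal analysis and resolution.
escape_rate = sum(1 for d in defects if not d[2]) / len(defects)

print(per_phase.most_common(), per_type.most_common(), f"{escape_rate:.0%}")
```

Feeding records like these into a causal analysis meeting would let a team trace each escaped defect back to its origin (model limitation, criterion, tool, or process).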


5 Conclusions

5.1.1 Summary

This work presented a survey characterizing the MBT approaches described in the technical literature, using a systematic review protocol.

The survey was developed to capture the state of the art of MBT approaches and to guide future research regarding MBT. It described the main information related to several MBT approaches and analyzed them to construct a body of knowledge on MBT. The expected results of this work are:

• The identification of MBT approaches, and of research groups working in this field. This information will be very useful during the progress of this work.
• The identification of possible future research topics to be explored.

In the next sections, the contributions of this work are presented, some limitations are discussed, and future work is described.

5.1.2 Contributions

In this section, the main contributions obtained during this work are presented:

SYSTEMATIC REVIEW. The execution of a systematic review, particularly its planning phase, when used as a tool for a bibliographical review, allows a more complete specification of what has to be searched in technical sources, facilitating the search and improving the researcher's focus during the reading and analysis of the publications. For this topic, the contributions are:

• Definition of a systematic review protocol to guide the planning, execution, and analysis of the survey. Moreover, this protocol allows the repetition or re-execution of this search by other researchers at different moments and on different sources (digital libraries). If this protocol is followed, the results can be integrated, as in a meta-analysis.


• The use of the JabRef tool to support the characterization, filtering, and analysis of the papers, and the creation of a database with information about each paper collected. This database is available from the authors of this work.

MBT APPROACHES ANALYSIS. The main contribution of this work is related to the MBT field. More than 400 MBT papers were found during this work (50% were excluded because they are not related to the purpose of this work, and 20% have already been analyzed), and the results have been presented in this document. For this topic, the contributions are:

• The obtained results, showing an analysis of the main characteristics of MBT approaches (such as behavior model, testing coverage, automation, and limitations);
• The identification of some contexts in which MBT has not been frequently applied;
• The description of observations based on MBT, supporting researchers during the development of new MBT approaches and practitioners during the selection of an MBT approach to be applied in a software project;
• Motivations for new research regarding MBT. These proposals have been described in Section 4.

5.1.3 Limitations

Some limitations are present in this work, related to the planning, execution, and analysis of the survey. The identified limitations are:

• Only 9 paper sources were selected in this systematic review. We cannot assure that the identified papers represent high coverage of the papers regarding MBT approaches. The analysis performed cannot be generalized to all MBT approaches, but only to the context of the identified papers. For example, many papers published after 2004 have probably not been found because they are not available in the sources used during this work.
• Of the 406 papers identified during this work, 57 are not available on the Internet. These papers had to be excluded from the characterization and analysis phases because it was impossible to extract their information.


• The set of identified papers was large. We therefore created three priority levels to define the analysis order for the papers. However, only Priority Levels 1 and 2 have been analyzed so far (20% of all papers). The papers in Priority Level 3 must be analyzed at another moment to complete this survey.

5.1.4 Future Work

Some tasks can be suggested as future activities to continue this work. They are:

• Perform the characterization and analysis of the papers in Priority Level 3, which have not been analyzed yet;
• Look for other paper sources to identify new papers regarding MBT;
• Explore new research topics for future work;
• Select one (or more) of the topics proposed in Section 4 of this work and explore it (or them), aiming at new MBT research for a Master's or PhD degree.


6 Bibliography References

ABDURAZIK, A. & OFFUTT, J. (2000), “Using UML Collaboration Diagrams for Static

Checking and Test Generation”, Proceedings of UML 2000 - The Unified Modeling

Language. Advancing the Standard. Third International Conference, York, UK,

October 2000, Springer, pp. 383--395.

ALI, S.; ur REHMAN, M.J.; BRIAND, L.C.; ASGHAR, H.; ZAFAR, Z.; NADEEM, A.

(2005), “A State-based Approach to Integration Testing for Object-Oriented

Programs”, Technical report, Carleton University.

AMLA, N. & AMMANN, P. (1992), “Using Z specifications in category partition testing”,

Proceedings of the COMPASS 7th Annual Conference on Computer Assurance

(COMPASS '92), Gaithersburg, MD, pp. 15--18.

AMMANN, P. & OFFUTT, J. (1994), “Using formal methods to derive test frames in

category-partition testing”, Proceedings of the Ninth Annual Conference on

Computer Assurance (COMPASS '94) - Safety, Reliability, Fault Tolerance,

Concurrency and Real Time, Security, pp. 69--79.

ANDREWS, A.; OFFUTT, J. & ALEXANDER, R. (2005), “Testing web applications by modeling with FSMs”, Software and Systems Modeling, 4(3):326-345, July 2005.

BARBEY, S.; BUCHS, D. & PÉRAIRE, C. (1996), “A theory of specification-based

testing for object-oriented software”, Proceedings of the Second EDCC2 (European

Dependable Computing Conference), Taormina, Italy, October 1996.

BASANIERI, F. & BERTOLINO, A. (2000), “A Practical Approach to UML-based

Derivation of Integration Tests”, Proceedings of 4th International Software Quality

Week Europe.

BEIZER, B. (1990), Software testing techniques, 2nd ed., Van Nostrand Reinhold Co.,

New York, NY.

BELLENTINI, C.; MARCHETTO, A. & TRENTINI, A. (2005), “TestUml: user-metrics driven web applications testing”, Proceedings of the 2005 ACM Symposium on Applied Computing (SAC'05), New Mexico, March 2005, pp. 1694-1698, ISBN 1-58113-964-0.

BERNARD, E.; BOUQUET, F.; CHARBONNIER, A.; LEGEARD, B.; PEUREUX, F.;

UTTING, M. & TORREBORRE, E. (2006), “Model-Based Testing from UML

Models”, Model-based Testing Workshop (MBT'2006), INFORMATIK'06, volume P-

94 of LNI, Lecture Notes in Informatics, Dresden, Germany, pp. 223--230, October

2006. Note: ISBN 978-3-88579-188-1.

BERTOLINO, A.; GNESI, S. (2003), “Use case-based testing of product lines”.

Proceedings of the ESEC / SIGSOFT FSE, pp. 355-358.

BERTOLINO, A.; MARCHETTI, E. & MUCCINI, H. (2005), “Introducing a Reasonably

Complete and Coherent Approach for Model-based Testing”, Electronic Notes in

Theoretical Computer Science 116( SPECISS), 85 - 97.

BERTOLINO, A.; MARCHETTI, E. & POLINI, A. (2003), “Integration of ‘components’ to

test software components”, Electronic Notes in Theoretical Computer Science

82(6), 49 - 59.

BEYER, M.; DULZ, W. & ZHEN, F. (2003), “Automated TTCN-3 test case generation by

means of UML sequence diagrams and Markov chains”, Proceedings of 12th Asian

Test Symposium (ATS 2003), pp. 102--105.

BIOLCHINI, J.; MIAN, P.G.; NATALI, A.C.; & TRAVASSOS, G.H. (2005), “Systematic

Review in Software Engineering: Relevance and Utility”, Technical Report ES-

679/05, PESC-COPPE/UFRJ. Available at http://www.cos.ufrj.br.

BIRD, D. L. & MUNOZ, C. U. (1983), “Automatic generation of random self-checking test

cases”, IBM Systems Journal, 22(3):229–245.

BOTASCHANJAN, J.; PISTER, M. & RUMPE, B. (2004), 'Testing agile requirements

models', Journal of Zhejiang University: Science 5( 5), 587 - 593.

du BOUSQUET, L.; OUABDESSELAM, F.; RICHIER, J. & ZUANON, N. (1999), “Lutess:

a specification-driven testing environment for synchronous software”, Proceedings

of the 1999 International Conference on Software Engineering (ICSE1999), pp.

267--276.


(a) BRIAND, L.; LABICHE, Y. & CUI, J. (2004), “Towards Automated Support for

Deriving Test Data from UML Statecharts”, (SCE-03-13, Version 2), Technical

report, Carleton University.

(b) BRIAND, L.; LABICHE, Y. & LIN, Q. (2004), “Improving State-Based Coverage

Criteria Using Data Flow Information”, Technical report, Carleton University.

BRIAND, L.; LABICHE, Y. & SOCCAR, G. (2002), “Automating impact analysis and

regression test selection based on UML designs”, Proceedings of International

Conference on Software Maintenance (ICSM).

BRIAND, L.C.; LABICHE, Y. (2002), “A UML-Based Approach to System Testing”,

Technical report, Carleton University.

BRIAND, L.C.; LABICHE, Y. & SÓWKA, M.M. (2006), “Automated, contract-based user

testing of commercial-off-the-shelf components”, Proceedings of the 28th

international conference on Software engineering (ICSE '06), ACM Press.

BRIAND, L.C.; LABICHE, Y.; WANG, Y. (2002), “Revisiting Strategies for Ordering

Class Integration Testing in the Presence of Dependency Cycles – An Investigation

of Graph-based Class Integration Test Order Strategies”, Technical report,

Carleton University.

CARD, D. (2005), “Defect Analysis: Basic Techniques for Management and Learning”,

Advances in Computers, vol. 65, chapter 7, pp. 259-295.

CARPENTER, P.B. (1999), “Verification of requirements for safety-critical software”,

Proceedings of the 1999 annual ACM SIGAda international conference on Ada

(SIGAda '99), ACM Press.

CAVARRA, A.; CRICHTON, C. & DAVIES, J. (2003), “A method for the automatic

generation of test suites from object models”, Proceedings of the 2003 ACM

symposium on Applied computing.

CHANG, J. & RICHARDSON, D.J. (1999), “Structural specification-based testing:

automated support and experimental evaluation”, Proceedings of the 7th European

software engineering conference, Springer-Verlag, ACM Press, pp. 285-302.


CHANG, J.; RICHARDSON, D.J. & SANKAR, S. (1996), “Structural specification-based

testing with ADL”, Proceedings of the 1996 International Symposium on Software

Testing and Analysis ISSTA, ACM Press.

CHEN, Y.; LIU, S. & NAGOYA, F. (2005), “An approach to integration testing based on

data flow specifications”, Lecture Notes in Computer Science 3407, 235 - 249.

CHEN, Y.; PROBERT, R.L. & SIMS, D.P. (2002), “Specification-based regression test

selection with risk analysis”, Proceedings of the 2002 conference of the Centre for

Advanced Studies on Collaborative research, IBM Press.

CHEVALLEY, P. & FOSSE, P.T. (2001), “Automated Generation of Statistical Test

Cases from UML State Diagrams”, COMPSAC '01: Proceedings of the 25th

International Computer Software and Applications Conference on Invigorating

Software Development, IEEE Computer Society, Washington, DC, USA, pp. 205--

214.

CHOW, T.S. (1978), “Testing software design modeled by finite-state machines”, IEEE

Trans. Software Eng., vol SE-4, pp. 178-187, Mar.

CRICHTON, C.; CAVARRA, A. & DAVIES, J. (2001), “Using UML for Automatic Test

Generation”, Proceedings of Automated Software Engineering (ASE).

DALAL, S.; JAIN, A.; KARUNANITHI, N.; LEATON, J. & LOTT, C. (1998), “Model-based

testing of a highly programmable system”, Proceedings of The Ninth International

Symposium on Software Reliability Engineering, pp. 174--179.

DALAL, S.; JAIN, A.; KARUNANITHI, N.; LEATON, J.; LOTT, C.; PATTON, G. &

HOROWITZ, B. (1999), “Model-based testing in practice”, Proceedings of the 1999

International Conference on Software Engineering (ICSE’99), May, pp. 285--294.

DENG, D.; SHEU, P. & WANG, T. (2004), “Model-based testing and maintenance”,

Proceedings of IEEE Sixth International Symposium on Multimedia Software

Engineering, pp. 278--285.

FRIEDMAN, G.; HARTMAN, A.; NAGIN, K. & SHIRAN, T. (2002), “Projected state

machine coverage for software testing”, Proceedings of the 2002 ACM SIGSOFT

international symposium on Software testing and analysis (ISSTA '02), ACM Press.


GARGANTINI, A. & HEITMEYER, C. (1999), “Using model checking to generate tests

from requirements specifications”, Proceedings of the 7th European software

engineering conference held jointly with the 7th ACM SIGSOFT international

symposium on Foundations of software engineering (ESEC/FSE-7), Springer-

Verlag, ACM Press.

GAROUSI, V.; BRIAND, L.C. & LABICHE, Y. (2006), “Traffic-aware stress testing of

distributed systems based on UML models”, Proceedings of the 28th international

conference on Software engineering (ICSE '06), ACM Press.

GNESI, S.; LATELLA, D. & MASSINK, M. (2004), “Formal test-case generation for UML

statecharts”, Proceedings of Ninth IEEE International Conference on Engineering

Complex Computer Systems, pp. 75--84.

GROSS, H.; SCHIEFERDECKER, I. & DIN, G. (2005), “Model-Based Built-In Tests”,

Electronic Notes in Theoretical Computer Science 111(SPEC ISS), 161 - 182.

HARTMAN, A. & NAGIN, K. (2004), “The AGEDIS tools for model based testing”, ISSTA

2004 - Proceedings of the ACM SIGSOFT International Symposium on Software

Testing and Analysis, 129 - 132.

HARTMANN, J.; IMOBERDORF, C. & MEISINGER, M. (2000), “UML-based integration

testing”, Proceedings of ISSTA’2000, pp. 60-70, Aug.


HONG, H.S.; KIM, Y.G.; CHA, S.D.; BAE, D.H. & URAL, H. (2000), “A test sequence

selection method for statecharts”, SOFTWARE TESTING VERIFICATION &

RELIABILITY 10(4), 203--227.

HOWDEN, W.E. (1987), “Functional program testing and analysis”, New York, NY, McGraw-Hill.

IEEE Recommended Practice for Software Requirements Specifications (1998). ANSI /

IEEE Std. 830-1998.


International Organization for Standardization (2001). ISO/IEC Standard 9126: Software

Engineering -- Product Quality, part 1.

JURISTO, N.; MORENO, A.M.; VEGAS, S. (2004), “Reviewing 25 years of testing

technique experiments”. Empirical Software Engineering: An International Journal,

9(1), p. 7-44, March.

KANSOMKEAT, S. & RIVEPIBOON, W. (2003), “Automated-generating test case using

UML statechart diagrams”, Proceedings of the 2003 annual research conference of

the South African institute of computer scientists and information technologists on

Enablement through technology (SAICSIT '03), South African Institute for

Computer Scientists and Information Technologists.

KIM, Y.; HONG, H.; BAE, D. & CHA, S. (1999), “Test cases generation from UML state

diagrams”, Software, IEE Proceedings- [see also Software Engineering, IEE

Proceedings] 146(4), 187--192.

KITCHENHAM, B. (2004), “Procedures for Performing Systematic Reviews”, Joint Technical Report, Software Engineering Group, Department of Computer Science, Keele University, United Kingdom, and Empirical Software Engineering, National ICT Australia Ltd, Australia.

LEGEARD, B.; PEUREUX, F. & UTTING, M. (2004), “Controlling test case explosion in

test generation from B formal models”, Software Testing, Verification & Reliability',

John Wiley and Sons Ltd.

LINZHANG, W.; JIESONG, Y.; XIAOFENG, Y.; Jun, H.; XUANDONG, L. & GUOLIANG,

Z. (2004), “Generating test cases from UML activity diagram based on Gray-box

method”, Proceeding of 11th Asia-Pacific Software Engineering Conference

(APSEC, 2004), pp. 284--291.

LIUYING, L. & ZHICHANG, Q. (1999), “Test selection from UML Statecharts”,

Proceedings of Technology of Object-Oriented Languages and Systems

(TOOLS1999), pp. 273--279.

LUCIO, L.; PEDRO, L. & BUCHS, D. (2005), “A methodology and a framework for

model-based testing”, Lecture Notes in Computer Science 3475, 57 - 70.


LUND, M.S. & STØLEN, K. (2006), “Deriving tests from UML 2.0 sequence diagrams

with neg and assert”, Proceedings of the 2006 international workshop on

Automation of software test (AST '06), ACM Press.

MANDRIOLI, D.; MORASCA, S. & MORZENTI, A. (1995), “Generating test cases for

real-time systems from logic specifications”, ACM Transactions on Computer

Systems (TOCS), ACM Press.

MYERS, G., “The Art of Software Testing”, New York: John Wiley, 1979.

MEYER, S. & SANDFOSS, R. (1998), “Applying Use-Case Methodology to SRE and

System Testing”, STAR West Conference, pp. 1-16.

MINGSONG, C.; XIAOKANG, Q. & XUANDONG, L. (2006), “Automatic test case

generation for UML activity diagrams”, Proceedings of the 2006 international

workshop on Automation of software test (AST '06), ACM Press.

MURTHY, P.V.; ANITHA, P.C.; MAHESH, M. & SUBRAMANYAN, R. (2006), “Test

ready UML statechart models”, Proceedings of the 2006 international workshop on

Scenarios and state machines: models, algorithms, and tools (SCESM '06).

NEBUT, C. & FLEUREY, F. (2006), “Automatic Test Generation: A Use Case Driven

Approach”, IEEE Transaction Software Engineering, IEEE Press, Piscataway, NJ,

USA, pp. 140--155.

OFFUTT, A.J. & LIU, S. (1999), “Generating test data from SOFL specifications”,

Journal of Systems and Software, 49(1), 49 - 62.

OFFUTT, J. & ABDURAZIK, A. (1999), “Generating tests from UML specifications”,

UML’99 Unified Modeling Language.

OFFUTT, J.; LIU, S.; ABDURAZIK, A. & AMMANN, P. (2003), “Generating Test Data

from State-Based Specifications”, Journal of Software Testing, Verification and

Reliability, John Wiley & Sons, Ltd, pp. 25-53.

OLIMPIEW, E.M. & GOMAA, H. (2005), “Model-based testing for applications derived

from software product lines”, Proceedings of the first international workshop on

Advances in model-based testing (A-MOST '05).


OSTRAND, T. J. & BALCER, M. J. (1988), "The Category-Partition Method for Specifying and Generating Functional Tests", Communications of the ACM, 31(6), June, pp. 676-686.

PARADKAR, A. (2004), "Plannable test selection criteria for FSMs extracted from operational specifications", 15th International Symposium on Software Reliability Engineering (ISSRE 2004), pp. 173-184.

PARISSIS, I. & OUABDESSELAM, F. (1996), "Specification-based testing of synchronous software", Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering (21), pp. 127-134.

PRASANNA, M.; SIVANANDAM, S.N.; VENKATESAN, R. & SUNDARRAJAN, R. (2005), "Survey on Automatic Test Case Generation", Academic Open Internet Journal, Volume 15, available at http://www.acadjournal.com/2005/v15/part6/p4/.

(a) PRETSCHNER, A.; LOTZBEYER, H. & PHILIPPS, J. (2001), "Model based testing in evolutionary software development", Proceedings of the International Workshop on Rapid System Prototyping, pp. 155-160.

(b) PRETSCHNER, A.; SLOTOSCH, O.; AIGLSTORFER, E. & KRIEBEL, S. (2001), "Model-based testing for real: the inhouse card case study", International Journal on Software Tools for Technology Transfer (STTT), Springer-Verlag, pp. 140-157.

PRETSCHNER, A.; LOTZBEYER, H. & PHILIPPS, J. (2004), "Model based testing in incremental system development", Journal of Systems and Software, 70, pp. 315-329.

PRETSCHNER, A. (2005), "Model-based testing", Proceedings of the 27th International Conference on Software Engineering (ICSE 2005), pp. 722-723.

RAMAMOORTHY, C. V.; HO, S. F. & CHEN, W. T. (1976), "On the automated generation of program test data", IEEE Transactions on Software Engineering, SE-2(4), pp. 293-300, Dec.


RAPPS, S. & WEYUKER, E.J. (1982), "Data Flow analysis techniques for test data selection", International Conference on Software Engineering, pp. 272-278, Tokyo, Sep.

RICHARDSON, D.J.; LEIF AHA, S. & O'MALLEY, T.O. (1992), "Specification-based test oracles for reactive systems", Proceedings of the International Conference on Software Engineering (ICSE 1992), Vol. 14, pp. 105-118.

RICHARDSON, D.J. & WOLF, A.L. (1996), "Software testing at the architectural level", Joint proceedings of the second international software architecture workshop (ISAW-2) and international workshop on multiple perspectives in software development (Viewpoints '96) on SIGSOFT '96 workshops, ACM Press.

RIEBISCH, M.; PHILIPPOW, I. & GOTZE, M. (2002), "UML-based statistical test case generation", International Conference NetObjectDays on Objects, Components, Architectures, Services, and Applications for a Networked World, Springer-Verlag.

ROCHA, A. R. C.; MALDONADO, J. C. & WEBER, K. C. (2001), "Qualidade de software - Teoria e prática" [Software quality - theory and practice], Prentice Hall, São Paulo.

RUMPE, B. (2003), "Model-based testing of object-oriented systems", Formal Methods for Components and Objects, 2852, pp. 380-402.

SATPATHY, M.; LEUSCHEL, M. & BUTLER, M. (2005), "ProTest: An Automatic Test Environment for B Specifications", Electronic Notes in Theoretical Computer Science, 111 (SPEC ISS), pp. 113-136.

SCHEETZ, M.; von MAYHAUSER, A.; FRANCE, R.; DAHLMAN, E. & HOWE, A.E. (1999), "Generating Test Cases from an OO Model with an AI Planning System", Proceedings of the 10th International Symposium on Software Reliability Engineering (ISSRE '99), IEEE Computer Society, Washington, DC, USA, pp. 250.

SINHA, A. & PARADKAR, A. (2006), "Model-based functional conformance testing of web services operating on persistent data", Proceedings of the 2006 workshop on Testing, analysis, and verification of web services and applications (TAV-WEB '06), ACM Press.


SOKENOU, D. (2006), "Generating Test Sequences from UML Sequence Diagrams and State Diagrams", Informatik Forschung und Entwicklung, pp. 236-240.

STOBIE, K. (2005), "Model Based Testing in Practice at Microsoft", Electronic Notes in Theoretical Computer Science, 111 (SPEC ISS), pp. 5-12.

(a) STOCKS, P.A. & CARRINGTON, D.A. (1993), "Test templates: a specification-based testing framework", Proceedings of the 15th International Conference on Software Engineering (ICSE 1993), IEEE Computer Society Press.

(b) STOCKS, P. & CARRINGTON, D. (1993), "Test template framework: a specification-based testing case study", Proceedings of the 1993 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '93), ACM Press.

STOCKS, P. & CARRINGTON, D. (1996), "A Framework for Specification-Based Testing", IEEE Transactions on Software Engineering, IEEE Press.

TAHAT, L.; VAYSBURG, B.; KOREL, B. & BADER, A. (2001), "Requirement-based automated black-box test generation", 25th Annual International Computer Software and Applications Conference (COMPSAC 2001), pp. 489-495.

TAN, L.; SOKOLSKY, O. & LEE, I. (2004), "Specification-based testing with linear temporal logic", Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration (IRI 2004), pp. 493-498.

TRAORÉ, I. (2003), "A transition-based strategy for object-oriented software testing", Proceedings of the 2003 ACM Symposium on Applied Computing, ACM Press.

TRAVASSOS, G. H. & BARROS, M. O. (2003), "Contributions of In Virtuo and In Silico Experiments for the Future of Empirical Studies in Software Engineering", 2nd Workshop in the Workshop Series on Empirical Software Engineering: The Future of Empirical Studies in Software Engineering (WSESE03), Rome.

VIEIRA, M.; LEDUC, J.; HASLING, B.; SUBRAMANYAN, R. & KAZMEIER, J. (2006), "Automation of GUI testing using a model-driven approach", Proceedings of the 2006 International Workshop on Automation of Software Test (AST '06), ACM Press.


WU, Y.; CHEN, M. & OFFUTT, J. (2003), "UML-Based Integration Testing for Component-Based Software", Proceedings of the Second International Conference on COTS-Based Software Systems (ICCBSS '03), Springer-Verlag, London, UK, pp. 251-260.

XU, W. & XU, D. (2006), "State-based testing of integration aspects", Proceedings of the 2nd Workshop on Testing Aspect-Oriented Programs (WTAOP '06), ACM Press.

ZHEN, R.D.; GRABOWSKI, J.; NEUKIRCHEN, H. & PALS, H. (2004), "Model-based testing with UML applied to a roaming algorithm for Bluetooth devices", Journal of Zhejiang University Science, pp. 1327-1335.

Appendix A – List of Identified Papers

Category | Title | Authors | Source | Year

B | A case for test-code generation in model-driven systems | Rutherford and Wolf | GPCE | 2003
B | A Choice Relation Framework for Supporting Category-Partition Test Case Generation | Chen et al. | IEEE Transactions on Software Engineering | 2003
E | A flexible environment to evaluate state-based test techniques | Hierons | ACM SIGSOFT Software Engineering Notes | 2004
B | A formal approach to requirements based testing in open systems standards | Leathrum and Liburdy | International Conference on Requirements Engineering | 1996
E | A framework and tool support for the systematic testing of model-based specifications | Miller and Strooper | TOSEM | 2003
D | A framework for specification-based class testing | Liu et al. | ICECCS | 2002
D | A Framework for Specification-Based Testing | Stocks and Carrington | IEEE Transactions on Software Engineering | 1996
NC | A framework for table driven testing of Java classes | Daley et al. | Software: Practice & Experience | 2002
D | A Generic Model-Based Test Case Generator | Popovic and Velikic | ECBS | 2005
B | A holistic approach to test-driven model checking | Belli and Guldali | International Conference on Innovations in Applied Artificial Intelligence | 2005
E | A hybrid component-based system development process | Teiniker et al. | |
A | A method for the automatic generation of test suites from object models | Cavarra et al. | ACM Symposium on Applied Computing | 2003
B | A method of generating massive virtual clients and model-based performance test | Kim | QSIC | 2005
A | A methodology and a framework for model-based testing | Lucio et al. | Lecture Notes in Computer Science | 2005
NC | A Methodology of Verification and Testing of Large Software Systems | Lipaev | Programming and Computing Software | 2003
E | A model-based statistical usage testing of communication protocols | Popovic et al. | International Symposium and Workshop on Engineering of Computer Based Systems |
B | A model-to-implementation mapping tool for automated model-based GUI testing | Paiva et al. | ICFEM | 2005
B | A new approach to test case generation based on real-time process algebra (RTPA) | Yao and Wang | Canadian Conference on Electrical and Computer Engineering | 2004
E | A practical approach to modified condition/decision coverage | Hayhurst and Veerhusen | DASC | 2001
C | A Practical Approach to UML-based Derivation of Integration Tests | Basanieri and Bertolino | QWE | 2000
E | A reliability estimator for model based software testing | Sayre and Poore | ISSRE |
E | A rigorous method for test templates generation from object-oriented specifications | Periyasamy and Alagar | Software Testing Verification and Reliability | 2001
E | A schema language for coordinating construction and composition of partial behavior descriptions | Grieskamp and Kicillof | SCESM | 2006
E | A specification driven hierarchical test methodology | Sathianathan and Smith | IEEE International ASIC Conference and Exhibit |
D | A specification-based adaptive test case generation strategy for open operating system standards | Watanabe and Sakamura | ICSE | 1996
NC | A specification-based case study from test class framework | Liu and Miao | Journal of Shanghai University | 2001
C | A State-based Approach to Integration Testing for Object-Oriented Programs | Ali et al. | Technical Report of Carleton University | 2005
NC | A study of user acceptance tests | Leung and Wong | Software Quality Control | 1997
NC | A test data generation tool based on inter-relation of fields in the menu structure | Lee and Choi | Journal of KISS: Computing Practices | 2003
E | A Test Generation Strategy for Pairwise Testing | Tsuchiya and Kikuno | TOSEM | 2002
B | A test sequence selection method for statecharts | Hong et al. | Software Testing Verification & Reliability | 2000
D | A theory of specification-based testing for object-oriented software | Barbey et al. | EDCC | 1996


E | A tool for testing synchronous software | Parissis | Achieving Quality in Software |
A | A transition-based strategy for object-oriented software testing | Traore | ACM Symposium on Applied Computing | 2003
A | A UML-Based Approach to System Testing | Briand and Labiche | Technical Report of Carleton University | 2002
NC | A use case driven testing process: towards a formal approach based on UML collaboration diagrams | Badri et al. | Formal Approaches to Software Testing | 2004
E | Action machines - towards a framework for model composition, exploration and conformance testing based on symbolic computation | Grieskamp et al. | QSIC | 2005
B | Action refinement in conformance testing | Van Der Bijl et al. | Lecture Notes in Computer Science | 2005
E | Activity based SW process as basis for ISO 9000 | Mahabala | CSI Communications | 1993
D | Adding natural relationships to Simulink models to improve automated model-based testing | Boden and Busser | AIAA/IEEE Digital Avionics Systems Conference | 2004
D | ADLscope: an automated specification-based unit testing tool | Chang and Richardson | ASE | 1998
NC | An adaptive use case design driven testing | Kim et al. | ISCA |
E | An analysis and testing method for Z documents | Ciancarini et al. | Annals of Software Engineering | 1997
E | An analysis of rule coverage as a criterion in generating minimal test suites for grammar-based software | Hennessy and Power | ASE | 2005
E | An Analysis of Test Data Selection Criteria Using the RELAY Model of Fault Detection | Richardson and Thompson | IEEE Transactions on Software Engineering | 1993
B | An approach to detecting domain errors using formal specification-based testing | Chen and Liu | Asia-Pacific Software Engineering Conference | 2004
NC | An approach to generate integration test cases based on UML collaboration diagrams | Zhang, W.; Dong, L. & Liang, Z. | Acta Electronica Sinica | 2004
D | An approach to integration testing based on data flow specifications | Chen et al. | Lecture Notes in Computer Science | 2005
E | An approach to specification-based testing systems | Zin et al. | Software Quality Engineering | 1997
E | An approach to verification and validation of a reliable multicasting protocol | Callahan and Montgomery | ISSTA | 1996
NC | An experimental evaluation of a higher-ordered-typed-functional specification-based test-generation technique | Sinha and Smidts | Empirical Software Engineering | 2006
D | An explorative journey from architectural tests definition down to code tests execution | Bertolino et al. | ICSE | 2001
E | An extended fault class hierarchy for specification-based testing | Lau and Yu | ACM Transactions on Software Engineering and Methodology | 2005
NC | An extended finite state machine based generation method of test suite | Guo and Ping | Journal of Software | 2001
E | An improved model-based method to test circuit faults | Cheng et al. | Theoretical Computer Science | 2005
E | An integrated method for designing user interfaces based on tests | Schilling et al. | A-MOST | 2005
NC | An object-based data flow testing approach for Web applications | Liu et al. | International Journal of Software Engineering and Knowledge Engineering | 2001
D | An overview of Lutess: A specification-based tool for testing synchronous software | du Bousquet and Zuanon | ASE | 1999
E | An overview of model-based testing | Utting | Technique et Science Informatiques | 2006
D | Analyzing software architectures with Argus-I | Vieira et al. | ICSE | 2000
E | Applicability of non-specification-based approaches to logic testing for software | Kobayashi et al. | International Conference on Dependable Systems and Networks | 2001
NC | Application of software quality assurance methods in validation and maintenance of reactor analysis computer codes | Reznik | Reactor Physics and Reactor Computations |
D | Applying conventional testing techniques for class testing | Chung et al. | IEEE Computer Society's International Computer Software & Applications Conference | 1996
NC | Applying extended finite state machines in software testing of interactive systems | Fantinato and Jino | Interactive Systems: Design, Specification, and Verification | 2003


E | Applying models in your testing process | Rosaria and Robinson | Information and Software Technology | 2000
NC | Applying mutation analysis to SDL specifications | Kovacs et al. | SDL 2003: System Design, Proceedings | 2003
A | Applying Use-Case Methodology to SRE and System Testing | Meyer and Sandfoss | STAR West Conference | 1998
NC | Approach to specification-based testing systems | Mohd Zin et al. | SQE | 1997
E | Auto-generating test sequences using model checkers: a case study | Heimdahl et al. | Formal Approaches to Software Testing | 2003
NC | Automated boundary testing from Z and B | Legeard et al. | FME | 2002
E | Automated consistency and completeness checking of testing models for interactive systems | Paradkar and Klinger | International Computer Software and Applications Conference | 2004
A | Automated Generation of Statistical Test Cases from UML State Diagrams | Chevalley and Fosse | COMPSAC | 2005
E | Automated generation of test programs from closed specifications of classes and test cases | Leow et al. | ICSE | 2004
NC | Automated generation of test scenarios based on UML specification | Nagy | Automatizace | 2005
E | Automated model-based testing of Chi simulation models with TorX | Van Osch | Lecture Notes in Computer Science | 2005
D | Automated Test Case Generation for Programs Specified by Relational Algebra Queries | Tsai et al. | IEEE Transactions on Software Engineering | 1990
NC | Automated test case generation from IFAD VDM++ specifications | Nadeem and Jaffar-Ur-Rehman | WSEAS Transactions on Computers | 2005
B | Automated test oracles for GUIs | Memon et al. | ACM SIGSOFT International Symposium on Foundations of Software Engineering | 2000
B | Automated testing from object models | Poston | Communications of the ACM | 1994
D | Automated Testing of Classes | Buy et al. | ISSTA | 2000
C | Automated TTCN-3 test case generation by means of UML sequence diagrams and Markov chains | Beyer et al. | ATS | 2003
B | Automated validation test generation | Weber et al. | DASC | 1994
E | Automated verification and test case generation for input validation | Liu and Tan | AST | 2006
C | Automated, contract-based user testing of commercial-off-the-shelf components | Briand et al. | ICSE | 2006
A | Automated-generating test case using UML statechart diagrams | Kansomkeat and Rivepiboon | SAICSIT | 2003
NC | Automatic construction of a class state-based testing model using method specifications | Al-Dallal and Sorenson | IASTED International Conference on Computer Science and Technology | 2003
D | Automatic extraction of abstract-object-state machines from unit-test executions | Xie et al. | ICSE | 2006
E | Automatic generation of test cases from Boolean specifications using the MUMCUT strategy | Yu et al. | ASE | 2006
E | Automatic Generation of Test Oracles—From Pilot Studies to Application | Feather and Smith | ASE | 2001
A | Automatic test case generation for UML activity diagrams | Mingsong et al. | AST | 2006
E | Automatic test generation for predicates | Paradkar et al. | ISSRE | 1996
A | Automatic Test Generation: A Use Case Driven Approach | Nebut and Fleurey | IEEE Transactions on Software Engineering | 2006
NC | Automatic UML-based test data generating tool: AUTEG | Cheongah and Byoungju | Journal of KISS: Computing Practices | 2002
D | Automatically testing interacting software components | Gallagher and Offutt | AST | 2006
B | Automating formal specification-based testing | Donat | International Joint Conference CAAP/FASE on Theory and Practice of Software Development | 1997
A | Automating impact analysis and regression test selection based on UML designs | Briand et al. | ICSM | 2002
D | Automating software module testing for FAA certification | Santhanam | ACM SIGAda International Conference on Ada | 2001
E | Automating Specification-Based Software Testing | Poston | IEEE Computer Society Press | 1997
NC | Automating test generation for discrete event oriented embedded systems | Cunning and Rozenblit | Journal of Intelligent and Robotic Systems | 2005
A | Automation of GUI testing using a model-driven approach | Vieira et al. | AST | 2006
NC | Axiomatic assessment of logic coverage software testing criteria | Liu and Miao | Ruan Jian Xue Bao/Journal of Software | 2004


E | Behavior modeling technique based on EFSM for interoperability testing | Noh et al. | ICCSA | 2005
NC | Behavior-based integration testing of software systems: a formal scenario approach | Pei et al. | International Conference on Systems Integration |
NC | Benefits of using model-based testing tools | Bruno et al. | Symposium on Software Quality | 1995
D | Black-box testing using flowgraphs: An experimental assessment of effectiveness and automation potential | Edwards | Software Testing Verification and Reliability | 2000
B | Boundary Coverage Criteria for Test Generation from Formal Models | Kosmatov et al. | ISSRE | 2004
E | Building testable software | Zucconi and Reed | ACM SIGSOFT Software Engineering Notes | 1996
E | Cause-effect graphing analysis and validation of requirements | Nursimulu and Probert | Conference of the Centre for Advanced Studies on Collaborative Research | 1995
D | Combining algebraic and model-based test case generation | Dan and Aichernig | Lecture Notes in Computer Science | 2005
B | Combining behavior and data modeling in automated test case generation | Schroeder et al. | QSIC | 2003
E | Comparison of fault classes in specification-based testing | Okun et al. | Information and Software Technology | 2004
E | Confirming configurations in EFSM testing | Petrenko et al. | IEEE Transactions on Software Engineering | 2004
B | Constructing multiple unique input/output sequences using metaheuristic optimisation techniques | Guo et al. | IEE Proceedings Software | 2005
E | Constructing test suites for interaction testing | Cohen et al. | ICSE | 2003
E | Continuous TTCN-3: testing of embedded control systems | Schieferdecker et al. | SEAS | 2006
D | Controlling test case explosion in test generation from B formal models | Legeard et al. | Software Testing, Verification & Reliability | 2004
B | Coverage metrics for requirements-based testing | Whalen et al. | ISSTA | 2006
D | Coverage-directed test generation with model checkers: challenges and opportunities | Devaraj et al. | COMPSAC | 2005
E | Criteria for generating specification-based tests | Offutt et al. | ICECCS | 1999
NC | DAS-BOOT: design-, architecture- and specification-based approaches to object-oriented testing | Richardson | ACM SIGSOFT Software Engineering | 2000
B | Data abstraction and constraint solving for conformance testing | Calame et al. | APSEC | 2005
E | Data flow testing as model checking | Hong et al. | ICSE | 2003
NC | Data Generation for Path Testing | Mansour and Salame | Software Quality Control | 2004
D | DeepTrans - a model-based approach to functional verification of address translation mechanisms | Adir et al. | 4th International Workshop on Microprocessor Test and Verification | 2003
E | Demonstration of an operational procedure for the model-based testing of CTI systems | Hagerer et al. | International Conference on Fundamental Approaches to Software Engineering | 2002
B | Dependence analysis in reduction of requirement based test suites | Vaysburg et al. | International Symposium on Software Testing and Analysis | 2002
E | Deriving operational software specifications from system goals | Letier and van Lamsweerde | ACM SIGSOFT Symposium on Foundations of Software Engineering | 2002
NC | Deriving test cases for composite operations in Object-Z specifications | Periyasamy et al. | |
D | Deriving test plans from architectural descriptions | Bertolino et al. | ICSE | 2000
A | Deriving tests from UML 2.0 sequence diagrams with neg and assert | Lund and Stølen | AST | 2006
D | Design and implementation of Triveni: a process-algebraic API for threads + events | Colby et al. | International Conference on Computer Languages | 1998
D | Developing a TTCN-3 test harness for legacy software | Okika et al. | AST | 2006
D | Distributed software testing with specification | Chang et al. | IEEE Computer Society's International Computer Software & Applications Conference | 1990
B | Domain specific test case generation using higher ordered typed languages for specification | Sinha and Smidts | University of Maryland at College Park | 2005
E | Early estimation of defect density using an in-process Haskell metrics model | Mark Sherriff | A-MOST | 2005
E | Economic perspectives in test automation: balancing automated and manual testing with opportunity cost | Ramler and Wolfmaier | AST | 2006


E | Engineering with logic: HOL specification and symbolic-evaluation testing for TCP implementations | Bishop et al. | Annual ACM Symposium on Principles of Programming Languages | 2006
E | Enhanced testing of domain specific applications by automatic extraction of axioms from functional specifications | Sinha et al. | ISSRE | 2003
B | Environment behavior models for scenario generation and testing automation | Auguston et al. | A-MOST | 2005
B | Evaluating several path-based partial dynamic analysis methods for selecting black-box generated test cases | Chan and Yu | QSIC | 2004
E | Experience With Teaching Black-Box Testing in a Computer Science/Software Engineering Curriculum | Chen and Poon | IEEE Transactions on Education | 2004
E | Experimental Modal Analysis and Computational Model Updating of a Car Body in White | Schedlinski et al. | International Conference on Noise and Vibration Engineering | 2004
NC | Extended model-based testing toward high code coverage rate | Takahashi and Kakuda | Software Quality ECSQ | 2002
E | Extending Simulink Models With Natural Relations To Improve Automated Model-Based Testing | Boden et al. | SEW | 2005
D | Extending test templates with inheritance | Murray et al. | ASWEC | 1997
E | Fault classes and error detection capability of specification-based testing | Kuhn | ACM Transactions on Software Engineering and Methodology | 1999
E | Fault model-driven test derivation from finite state models | Petrenko | Modeling and verification of parallel processes | 2001
NC | Feature interaction detection using a synchronous approach and testing | du Bousquet et al. | Computer Networks | 2000
E | Formal methods software engineering for the CARA system | Martin | International Journal on Software Tools for Technology Transfer | 2004
NC | Formal Specification Based Software Testing: An Automated Approach | Gill and Bhatia | Proceedings of the International Conference on Software Engineering Research and Practise | 2003
C | Formal test-case generation for UML statecharts | Gnesi et al. | IEEE ICECCS | 2004
D | Formally testing fail-safety of electronic purse protocols | Jurjens and Wimmel | ASE | 2001
B | From faults via test purposes to test cases: on the fault-based testing of concurrent systems
D | From MC/DC to RC/DC: Formalization and analysis of control-flow testing criteria | Vilkomir and Bowen | Formal Aspects of Computing | 2006
D | From Object-Z specifications to ClassBench test suites | Carrington et al. | Journal of Software Testing Verification and Reliability | 2000
E | From U2TP models to executable tests with TTCN-3: an approach to model driven testing | Zander et al. | Testing of communicating systems | 2005
E | Generating a test oracle from program documentation: work in progress | Peters and Parnas | ACM SIGSOFT international symposium on Software testing and analysis | 1994
B | Generating functional test cases in-the-large for time-critical systems from logic-based specifications | Morasca et al. | ISSTA | 1996
E | Generating optimal distinguishing sequences with a model checker | Mallett et al. | A-MOST | 2005
E | Generating oracles from your favorite temporal logic specifications | Dillon and Ramakrishna | ACM SIGSOFT Symposium on the Foundations of Software Engineering | 1996
D | Generating regression tests via model checking | Lihua et al. | COMPSAC | 2004
NC | Generating test case based on UML statecharts | Miao-Huai et al. | Mini Micro Systems | 2005
B | Generating test cases for real-time systems from logic specifications | Mandrioli et al. | ACM Transactions on Computer Systems | 1995
C | Generating Test Cases from an OO Model with an AI Planning System | Scheetz et al. | ISSRE | 1999
B | Generating test cases from class vectors | Leung et al. | Journal of Systems and Software | 2003
A | Generating test cases from UML activity diagram based on Gray-box method | Linzhang et al. | APSEC | 2004
NC | Generating test data for specification-based tests via quasirandom sequences | Che et al. | Lecture Notes in Computer Science | 2006
B | Generating test data from SOFL specifications | Offutt and Liu | Journal of Systems and Software | 1999
A | Generating Test Data from State-Based Specifications | Offutt et al. | Journal of Software Testing, Verification and Reliability | 2003


C | Generating Test Sequences from UML Sequence Diagrams and State Diagrams | Sokenou | Informatik Forschung und Entwicklung | 2006
B | Generating test suites for software load testing | Avritzer and Weyuker | International symposium on Software testing and analysis | 1994
A | Generating tests from UML specifications | Offutt and Abdurazik | UML | 1999
E | Generating transition probabilities to support model-based software testing | Walton and Poore | Software - Practice and Experience | 2000
B | Generating, selecting and prioritizing test cases from specifications with tool support | Yu et al. | QSIC | 2003
NC | Generation of reliability test data with UML for real-time embedded software | Jun and Minyan | Journal of Beijing University of Aeronautics and Astronautics | 2003
E | Generation of test sequences from formal specifications: GSM 11-11 standard case study | Bernard et al. | Software - Practice and Experience | 2004
B | Identification of categories and choices in activity diagrams | Chen et al. | QSIC | 2005
E | Implementation of model based intelligent next generation test generator using neural networks | Singer | SPIE The International Society for Optical Engineering | 1996
D | Improving design dependability by exploiting an open model-based specification | Tomita and Sakamura | IEEE Transactions on Computers | 1999
E | Improving functional testing using model-based diagnostics | Nolan and Carey | Proceedings of the Technical Program
E | Improving functional/diagnostic testing using model-based reasoning | Carey and Dussault | IEEE AUTOTESTCON Proceedings | 1998
C | Improving State-Based Coverage Criteria Using Data Flow Information | Briand et al. | Technical Report of Carleton University | 2004
D | Improving test suites via operational abstraction | Harder et al. | ICSE | 2003
B | Improving web application testing with user session data | Elbaum et al. | ICSE | 2003
E | In this Issue | Briand and Basili | Empirical Software Engineering | 2005
E | Increasing dependability by means of model-based acceptance test inside RTOS | Zhao et al. | Parallel Processing and Applied Mathematics | 2005
B | In-parameter-order: a test generation strategy for pairwise testing | Lei and Tai | IEEE International High-Assurance Systems Engineering Symposium | 1998
E | Integrating automated test generation into the WYSIWYT spreadsheet testing methodology | Fisher et al. | TOSEM | 2006
E | Integrating formal specification and software verification and validation | Duke et al. | Teaching Formal Methods | 2004
C | Integration of 'components' to test software components | Bertolino et al. | Electronic Notes in Theoretical Computer Science | 2003
B | Integration of specification-based and CR-based approaches for GUI testing | Chen et al. | AINA | 2005
C | Introducing a Reasonably Complete and Coherent Approach for Model-based Testing | Bertolino et al. | Electronic Notes in Theoretical Computer Science | 2005
D | JUMBL: a tool for model-based statistical testing | Prowell | Annual Hawaii International Conference on System Sciences | 2003
D | Korat: automated testing based on Java predicates | Boyapati et al. | ISSTA | 2002
B | Lessons learned from automating tests for an operations support system | Fecko and Lott | Software - Practice & Experience | 2002
D | Lutess: a specification-driven testing environment for synchronous software | du Bousquet et al. | ICSE | 1999
E | Mars Polar Lander fault identification using model-based testing | Blackburn et al. | IEEE International Conference on Engineering of Complex Computer Systems | 2002
E | Mars Polar Lander fault identification using model-based testing | Blackburn et al. | Annual NASA Goddard Software Engineering Workshop | 2001
B | Mastering test generation from smart card software formal models | Bouquet et al. | Lecture Notes in Computer Science | 2005
D | MaTeLo - statistical usage testing by annotated sequence diagrams, Markov chains and TTCN-3 | Dulz and Zhen | International Conference on Quality Software | 2003
E | Micro architecture coverage directed generation of test programs | Ur and Yadin | ACM/IEEE conference on Design automation | 1999
NC | Model based development of embedded vehicle software at DaimlerChrysler | Conrad et al. | Informatik Forschung und Entwicklung | 2005


D | Model based regression test reduction using dependence analysis | Korel et al. | ICSM | 2002
B | Model based testing in evolutionary software development | Pretschner et al. | RSP | 2001
NC | Model based testing in incremental system development | Pretschner et al. | Journal of Systems and Software | 2004
B | Model Based Testing in Practice at Microsoft | Stobie | Electronic Notes in Theoretical Computer Science | 2005
E | Model checking | Berg and Raffelt | Model Based Testing of Reactive Systems
E | Model-Based Approaches for Validating Business Critical Systems | Augusto et al. | STEP | 2003
C | Model-Based Built-In Tests | Gross et al. | Electronic Notes in Theoretical Computer Science | 2005
B | Model-based formal specification directed testing of abstract data types | Jia | COMPSAC | 1993
D | Model-based functional conformance testing of web services operating on persistent data | Sinha and Paradkar | TAV-WEB | 2006
B | Model-based specification and testing applied to the Ground-Based Midcourse Defense (GMD) system: an industry report | Lakey | A-MOST | 2005
NC | Model-based system testing of software product families | Reuys et al. | Lecture Notes in Computer Science | 2005
B | Model-based test case generation for smart cards | Philipps et al. | Electronic Notes in Theoretical Computer Science | 2003
E | Model-Based Test Driven Development of the Tefkat Model-Transformation Engine | Steel and Lawley | ISSRE | 2004
NC | Model-based test generation and implementation | Hartman and Proeyen | Informatie | 2003
E | Model-based testing | Pretschner | ICSE | 2005
A | Model-based testing and maintenance | Deng et al. | ICMSE | 2004
E | Model-based Testing Considering Cost, Reliability and Software Quality | Htoon and Thein | APSITT | 2005
A | Model-based testing for applications derived from software product lines | Olimpiew and Gomaa | A-MOST | 2005
E | Model-based testing for enterprise software solutions | Jain | COMPSAC | 2005
B | Model-based testing for real: the inhouse card case study | Pretschner et al. | International Journal on Software Tools for Technology Transfer (STTT) | 2001
A | Model-Based Testing from UML Models | Bernard et al. | Informatik Forschung und Entwicklung | 2006
B | Model-based testing in practice | Dalal et al. | ICSE | 1999
B | Model-based testing of a highly programmable system | Dalal et al. | International Symposium on Software Reliability Engineering | 1998
E | Model-based testing of concurrent programs with predicate sequencing constraints | Wu and Lin | QSIC | 2005
A | Model-based testing of object-oriented systems | Rumpe | Formal Methods for Components and Objects | 2003
NC | Model-based testing through a GUI | Kervinen et al. | FATES | 2005
C | Model-based testing with UML applied to a roaming algorithm for Bluetooth devices | Zhen et al. | Journal of Zhejiang University Science | 2004
E | Model-Based Testing: Challenges Ahead | Heimdahl | COMPSAC | 2005
E | Model-based tests of truisms | Menzies et al. | ASE | 2002
NC | Modeling and testing agent systems based on statecharts | Seo et al. | FORTE Workshops | 2004
B | Modeling requirements for combinatorial software testing | Lott et al. | A-MOST | 2005
E | Modellgestützte Erprobungsmethodik in der Antriebsstrangentwicklung [Model-based testing methodology in powertrain development] | Albers and Schyr | VDI Berichte | 2005
E | Modelling the quality economics of defect-detection techniques | Wagner | WoSQ | 2006
B | Models for synchronous software testing | Lakehal et al. | International Workshop on Model, Design and Validation | 2004
E | Multiplexing of partially ordered events | Campbell et al. | Lecture Notes in Computer Science | 2005
E | Mutation operators for Object-Z specification | Liu and Miao | ICECCS | 2005
E | Mutation Testing Applied to Estelle Specifications | Souza et al. | Software Quality Control | 1999
B | Mutation-based testing criteria for timeliness | Nilsson et al. | COMPSAC | 2004
E | Mutually enhancing test generation and specification inference | Tao and Notkin | Formal Approaches to Software Testing
E | Non-specification-based approaches to logic testing for software | Kobayashi et al. | Information and Software Technology | 2002


E | On fault classes and error detection capability of specification-based testing | Tsuchiya and Kikuno | ACM Transactions on Software Engineering and Methodology | 2002
B | On the Complexity of Generating Optimal Test Sequences | Boyd and Ural | IEEE Transactions on Software Engineering | 1991
E | On the economics of requirements-based test case prioritization | Srikanth and Williams | EDSER | 2005
NC | On the effectiveness of classification trees for test case construction | Chen and Poon | Information and Software Technology | 1998
B | On the effectiveness of mutation analysis as a black box testing technique | Murnane and Reed | ASWEC | 2001
E | On the identification of categories and choices for specification-based test case generation | Chen et al. | Information and Software Technology | 2004
B | On the integration of design and test: a model-based approach for embedded systems | Pfaller et al. | AST | 2006
E | On the relationships of faults for Boolean specification based testing | Lau and Yu | ASWEC | 2001
E | On the State of the Art in Requirements-based Validation and Test of Software | Ryser et al. | Technical Report at University of Zurich | 1999
E | On the testing methods used by beginning software testers | Yu et al. | Information and Software Technology | 2004
E | On the use of the classification-tree method by beginning software testers | Yu et al. | ACM symposium on Applied computing | 2003
E | One evaluation of model-based testing and its automation | Pretschner et al. | ICSE | 2005
E | On-line Fault Detection of Sensor Measurements | Koushanfar et al. | Proceedings of IEEE Sensors | 2003
B | Online testing with model programs | Veanes et al. | ESEC/FSE | 2005
B | Optimal strategies for testing nondeterministic systems | Lev Nachmanson | ISSTA | 2004
D | Parameterized unit tests | Tillmann and Schulte | ESEC/FSE-13 | 2004
B | Plannable test selection criteria for FSMs extracted from operational specifications | Paradkar | ISSRE | 2004
D | Play to test | Blass et al. | Formal Approaches to Software Testing | 2005
D | Practical approach to specification and conformance testing of distributed network applications | Kuliamin et al. | Lecture Notes in Computer Science | 2005
NC | Preamble computation in automated test case generation using constraint logic programming | Colin et al. | Software Testing, Verification & Reliability | 2004
E | Prioritizing JUnit Test Cases: An Empirical Assessment and Cost-Benefits Analysis | Do et al. | Empirical Software Engineering | 2006
B | Probe: a formal specification-based testing system | Amayreh and Zin | International conference on Information Systems | 1999
E | Product family testing: a survey | Tevanlinna et al. | ACM SIGSOFT Software Engineering Notes | 2004
B | Projected state machine coverage for software testing | Friedman et al. | ISSTA | 2002
E | Property-based testing: a new approach to testing for assurance | Fink and Bishop | ACM SIGSOFT Software Engineering Notes | 1997
B | ProTest: An Automatic Test Environment for B Specifications | Satpathy et al. | Electronic Notes in Theoretical Computer Science | 2005
NC | proTEST: an automatic, scenario-driven, requirements-based software test process | Hirt | CONQUEST | 2000
E | Random testing of interrupt-driven software | Regehr | EMSOFT | 2005
NC | Refinement in statechart testing | Bogdanov and Holcombe | Software Testing Verification & Reliability | 2004
D | Regression testing of classes based on TCOZ specification | Liang | ICECCS | 2005
B | Requirement-based automated black-box test generation | Tahat et al. | COMPSAC | 2001
B | Requirements traceability in automated test generation: application to smart card software validation | Bouquet et al. | A-MOST | 2005
B | Requirements-Based Monitors for Real-Time Systems | Peters and Parnas | IEEE Transactions on Software Engineering | 2002
E | Requirements-driven software test: a process-oriented approach | Ramachandran | ACM SIGSOFT Software Engineering Notes | 1996
NC | Reusing class-based test cases for testing object-oriented framework interface classes | Dallal and Sorenson | Journal of Software Maintenance and Evolution: Research and Practice | 2005
E | Reverse engineering of test cases for selective regression testing | Sneed | European Conference on Software Maintenance and Reengineering | 2004
C | Revisiting Strategies for Ordering Class Integration Testing in the Presence of Dependency Cycles – An Investigation of Graph-based Class Integration Test Order Strategies | Briand et al. | Technical Report of Carleton University | 2002


D | SALT - An Integrated Environment to Automate Generation of Function Tests for APIs | Paradkar | ISSRE | 2000
NC | Scenario-based system test with software-product families | Reuys et al. | Informatik Forschung und Entwicklung | 2005
E | Selecting small yet effective set of test data | Paradkar | IASTED International Multi-Conference on Applied Informatics | 2003
E | Separating sequence overlap for automated test sequence generation | Hierons | ASE | 2006
NC | Sequencing constraints-based regression testing of concurrent programs after specification changes | Kim et al. | Journal of KISS: Software and Applications | 2000
NC | Siddhartha - automated test driver-oracle synthesis | Richardson | ACM SIGSOFT Software Engineering Notes | 2000
E | Siddhartha: a method for developing domain-specific test driver generators | Reyes and Richardson | ASE | 1999
D | Software Architecture Analysis Based on Statechart Semantics | Dias and Vieira | International Workshop on Software Specification and Design | 2000
E | Software assurance by bounded exhaustive testing | Sullivan et al. | ISSTA | 2004
E | Software Fault Tolerance: A Tutorial | Wilfredo | NASA Langley Technical Report Server | 2000
E | Software in Metrology | RICHTER | PTB-MITTEILUNGEN | 1991
E | Software model checking in practice: an industrial case study | Chandra et al. | ICSE | 2002
NC | Software requirements and acceptance testing | Hsia et al. | Annals of Software Engineering | 1997
E | Software requirements validation via task analysis | Zhu et al. | Journal of Systems and Software | 2002
E | Software test selection patterns and elusive bugs | Howden | COMPSAC | 2005
D | Software testing at the architectural level | Richardson and Wolf | ISAW-2 and Viewpoints | 1996
E | Software unit test coverage and adequacy | Zhu et al. | ACM Computing Surveys | 1997
B | Specification based test sequence generation with propositional logic | Wimmel et al. | Software Testing Verification and Reliability | 2000
NC | Specification testing of synchronous software | Parissis | Technique et Science Informatiques | 2002
D | Specification-based class testing with ClassBench | Murray et al. | Asia Pacific Software Engineering Conference | 1998
A | Specification-based regression test selection with risk analysis | Chen et al. | CASCON | 2002
B | Specification-Based Test Generation for Security-Critical Systems Using Mutations | Wimmel and Jürjens | International Conference on Formal Methods and Software Engineering | 2002
D | Specification-based test oracles for reactive systems | Richardson et al. | ICSE | 1992
NC | Specification-based testing for GUI-based applications | Chen and Subramaniam | Software Quality Control | 2002
B | Specification-based testing for real-time avionic systems | Biberstein and Fitzgerald | IEE Colloquium on Applicable Modelling, Verification and Analysis Techniques for Real-Time Systems | 1999
B | Specification-based testing for real-time reactive systems | Alagar et al. | TOOLS | 2000
E | Specification-based testing of concurrent programs | Carver | ACM SIGSOFT Software Engineering Notes | 2000
NC | Specification-based testing of concurrent systems | Ulrich and Konig | Formal Description Techniques and Protocol Specification, Testing and Verification | 1998
B | Specification-based testing of reactive software: a case study in technology transfer | Jagadeesan et al. | Journal of Systems and Software | 1998
B | Specification-based testing of reactive software: tools and experiments - experience report | Jagadeesan et al. | ICSE | 1997
B | Specification-based testing of synchronous software | Parissis and Ouabdesselam | Symposium on the Foundations of Software Engineering | 1996
NC | Specification-based testing of user interfaces | Paiva et al. | Interactive Systems, Design, Specification, and Verification | 2003
B | Specification-based testing using cause-effect graphs | Paradkar et al. | Annals of Software Engineering | 1997
B | Specification-based testing with linear temporal logic | Tan et al. | IRI | 2004


D | Specifying and Testing Software Components using ADL | Hayes and Sankar | Technical Report at Sun Microsystems | 1994
D | State-based incremental testing of aspect-oriented programs | Xu and Xu | AOSD | 2006
D | State-based testing of integration aspects | Xu and Xu | WTAOP | 2006
B | Statechart testing method for aircraft control systems | Bogdanov and Holcombe | Software Testing Verification & Reliability | 2001
E | Static driver verifier, a formal verification tool for Windows device drivers | Levin | MEMOCODE | 2004
E | Status report: requirements engineering | Hsia et al. | IEEE Software | 1993
B | Strategies for automated specification-based testing of synchronous software | Parissis and Vassy | ASE | 2001
E | Structural coverage analysis method | Gifford | AIAA/IEEE Digital Avionics Systems Conference | 1996
D | Structural specification-based testing with ADL | Chang et al. | ISSTA | 1996
D | Structural specification-based testing: automated support and experimental evaluation | Chang and Richardson | ESEC | 1999
NC | Study on software test method based on finite state machine | Lan et al. | Journal of North China Electric Power University | 2005
E | Supporting Controlled Experimentation with Testing Techniques: An Infrastructure and its Potential Impact | Do et al. | Empirical Software Engineering | 2005
NC | Systematic Model-Based Testing of Embedded Automotive Software | Conrad et al. | Electronic Notes in Theoretical Computer Science | 2005
B | Systematic model-based testing of embedded control software: the MB3T approach | Conrad et al. | ICSE | 2004
NC | Systematic testing as base for securing quality | Schlatter | Information Management and Consulting | 2006
B | Telecommunication software validation using a synchronous approach | du Bousquet et al. | ASSET | 1998
B | Test Case Generation as an AI Planning Problem | Howe et al. | ASE | 1997
C | Test cases generation from UML state diagrams | Kim et al. | IEE Software | 1999
D | Test input generation for java containers using state matching | Visser et al. | ISSTA | 2006
D | Test input generation with java PathFinder | Visser et al. | ISSTA | 2004
E | Test prioritization for pairwise interaction coverage | Bryce and Colbourn | A-MOST | 2005
A | Test ready UML statechart models | Murthy et al. | SCESM | 2006
D | Test selection for object-oriented software based on formal specifications | Peraire et al. | International Conference on Programming Concepts and Methods | 1998
A | Test selection from UML Statecharts | Liuying and Zhichang | TOOLS | 1999
NC | Test specification based on trace description | Petrenko | Programming and Computer Software | 1993
D | Test template framework: a specification-based testing case study | Stocks and Carrington | ISSTA | 1993
D | Test templates: a specification-based testing framework | Stocks and Carrington | ICSE | 1993
B | Test-based model generation for legacy systems | Hungar et al. | ITC | 2003
D | TestEra: Specification-based testing of java programs using SAT | Khurshid and Marinov | ASE | 2004
E | Testing against some eventuality properties of synchronous software: A case study | du Bousquet et al. | Electronic Notes in Theoretical Computer Science | 2004
A | Testing agile requirements models | Botaschanjan et al. | Journal of Zhejiang University Scien

ce

2004

B

Tes

ting

of c

oncu

rren

t pr

ogra

ms

base

d on

mes

sage

seq

uenc

e ch

arts

C

hung

et

al.

Inte

rnat

iona

l Sym

posi

um o

n S

oftw

are

Eng

inee

ring

for

Par

alle

l and

Dis

trib

uted

Sys

tem

s 19

99

E

Tes

ting

poly

mor

phic

inte

ract

ions

in U

ML

sequ

ence

dia

gram

s

Sup

avita

and

S

uwan

nasa

rt

ITC

C

2005

B

Tes

ting

real

-tim

e em

bedd

ed s

oftw

are

usin

g U

PP

AA

L-T

RO

N:

an in

dust

rial c

ase

stud

y

Lars

en e

t al.

EM

SO

FT

20

05

E

Tes

ting

soft

war

e m

odel

ling

tool

s us

ing

data

mut

atio

n

Sha

n A

ST

20

06

B

Tes

ting

times

: O

n M

odel

-Driv

en T

est

Gen

erat

ion

for

Non

-Det

erm

inis

tic R

eal-

Tim

e S

yste

ms

B

rinks

ma,

E.

AC

SD

20

04

B

Tes

ting

web

app

licat

ions

by

mod

elin

g w

ith F

SM

s A

ndre

ws

et

al.

Sys

tem

s an

d M

odel

ing

2005

Page 100: Characterization of Model-based Software Testing  · PDF fileCharacterization of Model-based Software Testing Approaches Systematic Review

Cha

ract

eriz

atio

n of

Mod

el-b

ased

Sof

twar

e T

estin

g A

ppr

oach

es

Sys

tem

atic

Rev

iew

Thi

s do

cum

ent

has

been

dev

elop

ed t

o be

a jo

int S

CR

/CO

PP

E T

echn

ical

Rep

ort a

t PE

SC

/CO

PP

E/U

FR

J -B

razi

l

100

E

Testing with model checker: insuring fault visibility | Okun et al. | WSEAS Transactions on Systems | 2003 | B
Test-suite reduction for model based tests: effects on test quality and implications for testing | Heimdahl and George | ASE | 2004 | C
TestUml: user-metrics driven web applications testing | Bellettini et al. | SAC | 2005 | E
The (Im)maturity level of software testing | Bertolino | ACM SIGSOFT Software Engineering Notes | 2004 | E
The advanced mobile application testing environment | Binder and Hanlon | A-MOST | 2005 | A
The AGEDIS tools for model based testing | Hartman and Nagin | ISSTA | 2004 | E
The ASTOOT approach to testing object-oriented programs | Doong and Frankl | TOSEM | 1994 | E
The effect of code coverage on fault detection under different testing profiles | Cai and Lyu | A-MOST | 2005 | E
The semantics of triveni: A process-algebraic API for threads + events | Colby et al. | Electronic Notes in Theoretical Computer Science | 1998 | E
The specification-based testing of a trusted kernel: MK++ | Ford et al. | International Conference on Formal Engineering Methods | 1997 | E
The state problem for test generation in Simulink | Zhan and Clark | GECCO | 2006 | E
The UniTesK Approach to Designing Test Suites | Kuliamin et al. | Programming and Computing Software | 2003 | E
The use of model-based test requirements throughout the product lifecycle | Bukata et al. | Aerospace and Electronic Systems Magazine | 2000 | B
Thoroughness of specification-based testing of synchronous programs | Parissis and Vassy | ISSRE | 2003 | D
TinMan - A test derivation and management tool for specification-based class testing | Murray et al. | TOOLS | 1999 | E
Too darned big to test | Stobie | Queue | 2005 | E
Tools for model-based security engineering | Jürjens and Fox | ICSE | 2006 | B
Towards a more efficient way of generating test cases: class graphs | Leung and Wong | Asia-Pacific Conference on Quality Software | 2000 | C
Towards Automated Support for Deriving Test Data from UML Statecharts | Briand et al. | Technical Report of Carleton University | 2004 | B
Towards integration of use case modelling and usage-based testing | Regnell et al. | Journal of Systems and Software | 2000 | B
Towards model-based generation of self-priming and self-checking conformance tests for interactive systems | Paradkar | Information and Software Technology | 2004 | E
Towards model-driven testing of a Web application generator | Baresi et al. | Lecture Notes in Computer Science | 2005 | E
Traceability techniques: Using scenarios to support traceability | Naslavsky et al. | TEFSE | 2005 | C
Traffic-aware stress testing of distributed systems based on UML models | Garousi et al. | ICSE | 2006 | E
Translating test programs using a model-based process | Bearse and Lynch | AUTOTESTCON | 1999 | B
T-UPPAAL: online model-based testing of real-time systems | Mikucionis et al. | ASE | 2004 | B
T-VEC: a tool for developing critical systems | Blackburn and Busser | COMPASS | 1996 | B
T-VEC TM product summary | Workshop on Industrial Strength Formal Specification Techniques | 1998 | NC
UML based statistical testing acceleration of distributed safety-critical software | Yan et al. | ISPA | 2004 | C
UML-based integration testing | Hartmann et al. | ISSTA | 2000 | C
UML-Based Integration Testing for Component-Based Software | Wu et al. | ICCBSS | 2003 | A
UML-based statistical test case generation | Riebisch et al. | Net.ObjectDays | 2002 | E
Uniform descriptions for model based testing | Krishnan | ASWEC | 2004 | E
Usage model-based automated testing of C++ templates | Sayre | A-MOST | 2005 | B
Use Case-based Testing of Product Lines | Bertolino and Gnesi | ESEC and FSE-11 | 2003 | E
Use of sequencing constraints for specification-based testing of concurrent programs | Carver and Tai | IEEE Transactions on Software Engineering | 1998 | B
Use-case driven test for object-oriented system | Choi | Software Engineering and Applications | 2001 | E
Using a model checker to test safety properties | Ammann et al. | IEEE International Conference on Engineering of Complex Computer Systems | 2001 | B
Using artificial life techniques to generate test cases for combinatorial testing | Shiba et al. | International Computer Software and Applications Conference | 2004 | B

Using formal methods to derive test frames in category-partition testing | Ammann and Offutt | COMPASS | 1994 | B
Using formal specifications as test oracles for system-critical software | Hagar and Bieman | ACM SIGAda Ada Letters | 1996 | E
Using information about functions in selecting test cases | Clermont and Parnas | A-MOST | 2005 | B
Using model checking to generate tests from requirements specifications | Gargantini and Heitmeyer | ESEC/FSE-7 | 1999 | E
Using model-based test program generator for simulation validation | Zhang et al. | Lecture Notes in Computer Science | 2005 | E
Using Simulation to Empirically Investigate Test Coverage Criteria Based on Statechart | Briand et al. | ICSE | 2004 | NC
Using state diagrams to generate unit tests for object-oriented systems | Ipate and Holcombe | Lecture Notes in Computer Science | 2005 | E
Using Test Oracles Generated from Program Documentation | Peters and Parnas | IEEE Transactions on Software Engineering | 1998 | E
Using the incremental approach to generate test sets: a case study | Yu et al. | QSIC | 2003 | C
Using UML Collaboration Diagrams for Static Checking and Test Generation | Abdurazik and Offutt | International Conference of UML | 2000 | C
Using UML for Automatic Test Generation | Crichton et al. | ASE | 2001 | B
Using Z specifications in category partition testing | Amla and Ammann | COMPASS | 1992 | E
Validation in model-driven engineering: testing model transformations | Fleurey et al. | International Workshop on Model, Design and Validation | 2004 | E
Validation test of distributed program based on event sequencing constraints | Qing et al. | Journal of Software | 2000 | A
Verification of requirements for safety-critical software | Carpenter | SIGAda | 1999 | B
White on black: a white-box-oriented approach for selecting black box-generated test cases | Chen et al. | Asia-Pacific Conference on Quality Software | 2000

Appendix B – Model-based Testing Approaches Classification

Approach | Publication Year | Category | Testing Level | Software Domain | Technology/Language | Behavior Models | Tools to support | Automated Steps | Complexity of not-automated steps | Intermediate Model

Abdurazik2000 | 2000 | C | System Testing | OO | UML | Collaboration Diagram | Not-defined | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | No
Ali2005 | 2005 | C | Integration Testing | OO | UML and SCOTEM | State Diagram, Collaboration Diagram, and SCOTEM | SCOTEM Constructor, Test Path Generator, Test Executor, and Results Evaluator | "3/4" | 0. Necessary (Initial Behavior Model Modeling) | SCOTEM
Ammann1994 | 1994 | B | System Testing | Not defined | Z Notation, TSL | Z Specification | Not Defined | "1/5" | All steps can be defined together by the complexity: Necessary (Initial Behavior Model Modeling) | No
Andrews2005 | 2005 | B | System Testing | Web Application | Finite State Machine | Finite State Machine | One tool is defined | "3/8" | 1.1 High (partitioning into clusters); 1.3 Medium (Intermediate Model Modeling or Test Data Definition); 1.4 Medium (Intermediate Model Modeling or Test Data Definition) | No
Barbey1996 | 1996 | B | Unit Testing | OO | CO-OPN/2 (Concurrent Object-Oriented Petri Nets) | CO-OPN/2 | LOFT | "3/4" | 1. Necessary (Initial Behavior Model Modeling) | No
Basanieri2000 | 2000 | C | Integration Testing | OO | UML | Use Case Diagram, Sequence Diagram, and Class Diagram | Not-defined | "3/7" | 1. Necessary (Initial Behavior Model Modeling); 2. High (Analysis of the Behavior Model); 4. Medium (Test Model Modeling); 5. High (Translation or other task) | No
Bellettini2005 | 2005 | C | System Testing | Web Application | UML | Extended Class Diagram, Statechart Diagrams | TestUml | "4/6" | 0. Necessary (Initial Behavior Model Modeling); 1. Low (Only choices); 4. Low (Only choices) | No
Bernard2006 | 2006 | A | System Testing | Not defined | UML 2.0 | Class Diagram, Object Diagram, Statechart Diagram, OCL | LEIRIOS Test Generator | "4/5" | 1. Necessary (Initial Behavior Model Modeling) | No
Bertolino2003 | 2003 | C | Integration Testing | Component-based Software | UML Components | Use Case Diagram, Sequence Diagram, Class Diagram | Cow_Suite and CDT | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | No

Bertolino2005 | 2005 | C | Integration Testing | OO | UML | Sequence Diagram, State Diagram, Reasonably Complete Model | Cow_Suite | "2/4" | 0. Necessary (Initial Behavior Model Modeling); 1. High (Translation or other task); 3. High (Translation or other task) | SEQ'' Model and SQRrc Model
Beyer2003 | 2003 | C | Integration Testing | Not defined | UML | Annotated Sequence Diagram and MCUM | MaTeLo | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | MCUM (Markov Chain Usage Model)
Botaschanjan2004 | 2004 | A | System Testing | OO | UML | Object Diagram, Sequence Diagram and OCL | Not-defined | "0/3" | 1. Necessary (Initial Behavior Model Modeling); 2,3. Medium (Intermediate Model Modeling) | No
Bousquet1999a | 1999 | D | System Testing | Synchronous Reactive Software | LUSTRE | Environment description | Lutess | "3/5" | 1. Necessary (Initial Behavior Model Modeling); 4. Low (Only choices) | No
Briand2002 | 2002 | A | System Testing | OO | UML | Use Case Diagram + Activity Diagram, Sequence Diagram, Collaboration Diagram, Class Diagram, Data Dictionary (OCL) | TOTEM | "8/9" | 1. Necessary (Initial Behavior Model Modeling) | No
Briand2002a | 2002 | A | Regression Testing | OO | UML | Class Diagram, Use case Diagram and Sequence Diagrams | RTSTool | "2/3" | 3. Low (Only choices) | No
Briand2002b | 2002 | C | Integration Testing | OO | UML and Graph | Class Diagram and Graph | Not-defined | "2/2" | 0. Necessary (Initial Behavior Model Modeling) | Graph
Briand2004a | 2004 | C | System Testing | OO | UML and TTS | Statechart Diagram + OCL, Class Diagram, Transition Test Sequence | CBCD Tool | "3/3" | 0. Necessary (Initial Behavior Model Modeling) | No
Briand2004b | 2004 | C | System Testing | OO | UML and EAFG | State Diagram + OCL, event/action flow graph | Not-defined | "3/4" | 2. High (Translation or other task) | Event/action flow graph
Briand2006 | 2006 | C | Unit Testing | COTS | UML | Class Diagram, Sequence Diagram + OCL, CSPE constraints, graph | PrestoSequence | "3/7" | 1. Necessary (Initial Behavior Model Modeling); 2. Medium (Intermediate Model Modeling or Test Data Definition); 3. Low (Only choices); 4. Low (Only choices)
Carpenter1999 | 1999 | A | System Testing | Safety-critical System | UML | Use case Diagram, Sequence Diagram | Not-defined | "2/5" | 1. Necessary (Initial Behavior Model Modeling); 3. Medium (Intermediate Model Modeling or Test Data Definition); 5. High (Translation or other task) [Converting test cases into test scripts, definition of value classes for each field] | No

Cavarra2003 | 2003 | A | System Testing | OO | UML and Abstract State Machine | Class Diagram, State Diagram, Object Diagrams, Intermediate Format (ASM) | AGEDIS | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | Intermediate Format
Chang1999 | 1999 | D | Unit Testing | Not defined | Assertion Definition Language, C | ADL Specification and Test Data Description (TDD) | ADLscope | "3/5" | 1. Necessary (Initial Behavior Model Modeling); 2. Medium (test data and Boolean condition modeling) | No
Chen2002 | 2002 | A | Regression Testing | OO | UML | Activity Diagram | Not-defined | "2/3 (1st option); 0/2 (2nd option)" | Both: 0. Necessary (Initial Behavior Model Modeling); Targeted Testing: 3. Low (Only choices); Safety Testing: 1,2. Low (Only choices) | No
Chen2005 | 2005 | D | Integration Testing | Not defined | SOFL Language | Condition Data Flow Diagram (CDFD, SOFL) | Not Defined | "3/4" | 0. Necessary (Initial Behavior Model Modeling); 4. Medium (Test execution and analysis) | No
Chevalley2001 | 2001 | A | System Testing | Critical Software | UML | State Diagram | Rose RealTime tool | "3/5" | 1. Necessary (Initial Behavior Model Modeling, UML State Diagram); 2. Necessary (Initial Behavior Model Modeling, Input Variables) | No
Crichton2001 | 2001 | C | System Testing | OO | UML and Intermediate Format | System Model (Class, Object and State Diagram) and Test Directives (Object and State Diagrams), Intermediate Format | Not-defined | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | Intermediate Format
Dalal1999 | 1999 | D | Unit Testing | Not defined | AETG Spec | Requirements described in AETG Spec | AETG | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | No
Deng2004 | 2004 | A | System Testing, Regression Testing | OO | UML | Use Case, Class, Statechart, Sequence, Collaboration, Activity Diagrams | ISSEE | "1/1 (1st option); 0/4 (2nd option)" | Both: 0. Necessary (Initial Behavior Model Modeling); Regression Testing: 1,2,3,4. Medium (Modeling and Selection of the Test Cases for Regression Testing) | No
Friedman2002 | 2002 | B | System Testing | Concurrent Software | Murφ Description Language | Finite State Machine | GOTCHA | "4/6" | 1. Necessary (Initial Behavior Model Modeling); 3. Low (Only choices) | No
Gargantini1999 | 1999 | B | System Testing | Not defined | SCR Specifications | SIS Specification | Tool in Java that constructs a suite of test sequences and another to generate extra test sequences for data boundaries | "1/3" | 1. Necessary (Initial Behavior Model Modeling) | SMV or Spin
Garousi2006 | 2006 | C | System Testing (Stress Testing) | Distributed | UML and System Test Model | Class, Sequence, Context, Network Deployment (Package Diagram), and Modified Interaction Overview (Interaction Overview Diagram) Diagrams (UML); Test Model (control flow model, network interconnectivity tree, network traffic usage patterns, and inter-SD constraints) | Not-defined | "2/4" | 1. Necessary (Initial Behavior Model Modeling); 2. Medium (Test Model Modeling) | No

Gnesi2004 | 2004 | C | System Testing | OO | UML and IOLTS | Statechart Diagram and IOLTS | Not-defined | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | IOLTS
Gross2005 | 2005 | C | Integration Testing | OO | UML 2.0 and TTCN-3 | U2TP and TTCN-3 | Not-defined | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | TTCN-3
Hartman2004 | 2004 | A | System Testing | Distributed System | UML, IF and XML | Behavior model: combination of class, state and object diagrams; Test Generation Directives: State Diagram; Test Execution Directives: XML file | AGEDIS | "3/6" | 1,2,3. Necessary (Initial Behavior Model Modeling) | Intermediate Format
Hartmann2000 | 2000 | C | Integration Testing, Unit Testing | OO | UML and TSL | Statechart Diagram and Graph | TnT (TDE and TECs) | "3/4" | 1. Necessary (Initial Behavior Model Modeling) | Graph
Hong2000 | 2000 | B | System Testing | Not defined | STATEMATE, Extended Finite State Machine | Statechart, Extended Finite State Machine | STATEMATE toolset | "5/6" | 1. Necessary (Initial Behavior Model Modeling) | EFSM, flow graph
Kansomkeat2003 | 2003 | A | System Testing | OO | UML and EFSM | Statechart Diagram, Testing Flow Graph | Not-defined | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | Testing Flow Graph
Kim1999 | 1999 | C | Unit Testing | OO | UML and EFSM | State Diagram, EFSM | Not-defined | "3/4" | 1. Necessary (Initial Behavior Model Modeling) | Extended Finite State Machine
Legeard2004 | 2004 | D | System Testing | Critical Software | B or Z Language | Formal Specification (B or Z) | BZ-TT | "4/6" | 1. Necessary (Initial Behavior Model Modeling); 4. Low (Only choices about the testing coverage criteria to reduce the test cases set) | BZP Format
Linzhang2004 | 2004 | A | System Testing | OO | UML | Activity Diagram | UMLTGF | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | No
Liuying1999 | 1999 | A | System Testing | OO | UML, Finite State Machine | State Diagram, FSM | Rose98 | "3/7" | 1. Necessary (Initial Behavior Model Modeling); 3.1 Low (Only choices); 3.2 Low (Only choices); 3.3 Low (Only choices) | Finite State Machine

Lucio2005 | 2005 | A | System Testing | OO | Fondue | Concept Model: Class Diagram (Fondue); Behavior Model: Environment (UML collaboration diagrams), Protocol (UML state diagrams) and Operation (OCL operations) (Fondue) | Not Defined | "3/5" | 1. Necessary (Initial Behavior Model Modeling); 2,4. Medium (Intermediate Model Modeling or Test Data Definition) | No
Lund2006 | 2006 | A | System Testing | Not defined | UML 2.0 | Sequence Diagram | Not-defined | "0/0" | Not Applied | No
Mandrioli1995 | 1999 | B | System Testing | Real-time System and Critical Software | TRIO Language | Software Specification in TRIO | Prototype Tool is described | "3/6" | 1. Necessary (Initial Behavior Model Modeling); 2. Necessary (Initial Behavior Model Modeling); 3. Low (Only choices) | No
Meyer1998 | 1998 | A | System Testing (GUI Testing) | Not defined | UML | Use Case + Operational Profile (Test Engineering Model) | TestMaster and WinRunner | "2/5" | 1. Necessary (Initial Behavior Model Modeling); 3. High (Translation or other task [extension of models with physics information about the software]); 5. High (Translation or other task) | No
Mingsong2006 | 2006 | A | System Testing | Java Programs | UML | Activity Diagram | AGTCG | "1/4" | 1. Necessary (Initial Behavior Model Modeling); 2. Medium (Code instrumentation); 3. Low (Only choices) | No
Murthy2006 | 2006 | A | System Testing | Not defined | UML and Context Free Grammar Model | Extended Statechart Diagram, EXTENDED CTGM | Test generation Tool | "3/3" | 0. Necessary (Initial Behavior Model Modeling) | No
Nebut2006 | 2006 | A | System Testing | Object-oriented Embedded Software | UML and OCL | Use Case Diagram, Sequence Diagram, Simulation Model | UC-System | "3/5" | 1. Necessary (Initial Behavior Model Modeling); 2. Medium (Intermediate Model Modeling) | Simulation Model
Offut1999a | 1999 | B | System Testing, Unit Testing | Critical Software | SOFL, DNF, TSL | DNF, CDFD (SOFL), S-Module (SOFL), I-Module (SOFL) | Not Defined | "2/4" | 1. Necessary (Initial Behavior Model Modeling); 2. Necessary (Initial Behavior Model Modeling) | DNF
Offutt1999b | 1999 | A | System Testing | OO | UML | Statechart Diagram | UMLTest | "3/4" | 1. Necessary (Initial Behavior Model Modeling) | No
Offutt2003 | 2003 | A | System Testing | OO | UML and Graph | Statechart Diagram, Specification Graph | SpecTest | "6/8" | 0. Necessary (Initial Behavior Model Modeling); 6. High (Translation or other task); 8. Medium (Test Data Definition) | Specification Graph
Olimpiew2005 | 2005 | A | System Testing | Software Product Line | PLUS (UML 2.0) | Feature model, use case model, static [class] and dynamic [statechart and object interaction] models, component-based software architecture models | Not-defined | "0/0" | Not Applied | No

Paradkar2004a | 2004 | B | System Testing | Reactive System | SALT, Extended Finite State Machine | Operational Model (SALT), Extended Finite State Machine (EFSM) | Planner | "2/3" | 1. Necessary (Initial Behavior Model Modeling) | No
Parissis1996 | 1996 | B | System Testing | Synchronous Software (Reactive Software) | LUSTRE Environment | Specification and the safety properties | LESAR | "4/5" | 1. Necessary (Initial Behavior Model Modeling) | No
Pretschner2001 | 2001 | B | System Testing | Reactive Systems and Embedded System | AutoFocus, Constraint Logic Programming | System Structure Diagrams, State Transition Diagram, Message Sequence Charts, Data Type Diagram | AutoFocus | "4/5" | 1. Necessary (Initial Behavior Model Modeling) | CLP Code
Richardson1992 | 1992 | D | System Testing | Reactive systems | Not defined, may be changed | In the case study, RTIL and Z Specification | Not defined | "2/4" | 1. Necessary (Initial Behavior Model Modeling); 3. Medium (Intermediate Model Modeling or Test Data Definition) | No
Richardson1996 | 1996 | D | Integration Testing | Not defined | Chemical Abstract Machine | CHAM | Not Defined | "2/4" | 1. Necessary (Initial Behavior Model Modeling); 2. Low (Only choices) | No
Riebisch2002 | 2002 | A | System Testing | OO | UML and Graph | Use Case, Statechart Diagram, Graph Model and Usage Model | UsageTester Tool | "5/6" | 1. Necessary (Initial Behavior Model Modeling) | Graph Model and Usage Model
Rumpe2003 | 2003 | A | System Testing | OO | UML | Object Diagram, Sequence Diagram, OCL | Not-defined | "0/3" | 1,2,3. High (Translation or other task) | No
Satpathy2005 | 2005 | B | System Testing | Not defined | B Specification, XML, Prolog Representation | State Coverage Graph (B) | ProTest | "7/8" | 1. Necessary (Initial Behavior Model Modeling) | XML File and Prolog Representation
Scheetz1999 | 1999 | C | Integration Testing | OO | UML | Class Diagram and State Diagram + OCL | UCPOP 4.0 (Planner) | "2/4" | 1. Necessary (Initial Behavior Model Modeling); 2. M

ediu

m (

Inte

rmedia

te M

odel

Mode

ling)

Test

Pla

n

Sin

ha20

06a

2006

D

Inte

gra

tion

Test

ing

Web

se

rvic

e

WS

DL-

S,

EF

SM

W

SD

L-S

, E

FS

M

Not

Def

ined

“2/3

” 1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

Ext

ende

d F

inite

S

tate

Mac

hin

e

Soke

nou20

06

2006

C

Inte

gra

tion

Test

ing,

U

nit

Test

ing

OO

U

ML

Sequenc

e D

iagra

m,

Pro

toco

l Sta

tech

art

, O

CL

Not-

def

ined

“3/4

” 1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

No

Sto

bie2005

a

2005

B

Sys

tem

T

est

ing

Any

Cate

gor

y A

SM

L, X

ML

Fin

ite S

tate

Mach

ine

(FS

M)

and A

bstr

act

Sta

te M

achin

e Languag

e (A

SM

L)

TM

T a

nd

Asm

L

Test

Tool

“3/4

” 1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

No

Sto

cks1

996

1996

D

Unit

Test

ing

Not defin

ed

Z N

ota

tion,

TS

L

Test

Tem

pla

te,

desc

ribin

g va

lid in

puts

and o

utp

uts

The fra

mew

ork

“0

/0”

Not A

pplie

d

No

Page 108: Characterization of Model-based Software Testing  · PDF fileCharacterization of Model-based Software Testing Approaches Systematic Review

Cha

ract

eriz

atio

n of

Mod

el-b

ased

Sof

twar

e T

estin

g A

ppr

oach

es

Sys

tem

atic

Rev

iew

Thi

s do

cum

ent

has

been

dev

elop

ed t

o be

a jo

int S

CR

/CO

PP

E T

echn

ical

Rep

ort a

t PE

SC

/CO

PP

E/U

FR

J -B

razi

l

108

Tahat2

001

2001

B

Sys

tem

T

est

ing

and

Regre

ssio

n

Test

ing

Dis

trib

uted

Sys

tem

and

Em

bedded

S

yste

m

SD

L, E

FS

M

Requirem

ents

exp

ress

ed in

Text

ual

and S

DL form

ats

, E

FS

M

Not

Def

ined

“3/5

and

2/2

For

Sys

tem

Tes

ting:

1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

4. Low

(O

nly

choic

es)

EF

SM

Tan2004

2004

B

Sys

tem

T

est

ing

Not defin

ed

linear

tem

pora

l lo

gic

– L

TL

Softw

are

Spec

ifica

ton

in L

TL

Mode

l Che

cker

: S

MV

and

SP

IN

“3/4

” 1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

No

Tra

ore

200

3

2003

A

Sys

tem

T

est

ing

OO

U

ML

Sta

tech

art

, C

lass

and

Sequenc

e D

iagra

ms

PrU

DE

“4

/6”

1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

2. M

ediu

m (

Ext

ende

d M

odel

Mode

ling)

No

Vie

ira20

06

2006

A

Sys

tem

T

est

ing

(GU

I T

est

ing)

Not defin

ed

UM

L and

TS

L

Use

Case

+ A

ctiv

ity

Dia

gra

m a

nd

Cla

ss

Dia

gra

m

TD

E/U

ML a

nd

Ecl

ipse

(S

CR

) U

ML

Edito

r “2

/4”

1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

2. M

ediu

m (

Inte

rmedia

te M

odel

Mode

ling)

Cla

ss D

iagra

m

and N

otes

on

Act

ivity

D

iagra

m

Wu2

003

2003

C

Inte

gra

tion

Test

ing

Com

ponent

-base

d S

oftw

are

U

ML

UM

L C

olla

bora

tion/

S

equenc

e D

iagra

m o

r U

ML

Sta

tech

art

D

iagra

m to

des

crib

e

Conte

xt-d

epe

nden

ce

rela

tions

hip

s U

ML

Colla

bora

tion

Dia

gra

m o

r U

ML

Sta

tech

art

Dia

gra

m to

desc

ribe C

onte

nt-

depende

nce

rela

tions

hip

s

Not

Def

ined

“0/0

” N

ece

ssary

(In

itial B

ehav

ior

Mode

l Modelin

g)

No

Xu2006a

2006

D

Inte

gra

tion

Test

ing

Asp

ect

-orient

ed

pro

gra

ms

AO

P

Sta

te M

odel (

AO

P)

Not

Def

ined

“4/5

” 1. N

ece

ssar

y (I

niti

al B

ehav

ior

Mode

l Modelin

g)

No

Zhen2004

2004

C

Inte

gra

tion

Test

ing

OO

U

ML

2.0

Inte

ract

ion,

Act

ivity

and S

tate

Dia

gram

s N

ot-

def

ined

“0/4

” 1,2

,3,4

. H

igh (

Tra

nsla

tion

or

oth

er tas

k)

No

Each entry below lists, per approach: non-functional requirements (NFR), inputs, input interpretation, outputs, output interpretation, and external software.

Abdurazik2000. NFR: No. Inputs: UML Collaboration Diagram in an undefined format. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Test data. Output interpretation: Not defined. External software: No.

Ali2005. NFR: No. Inputs: UML Collaboration Diagram in an XMI file and test data in a text file. Input interpretation: Yes; XMI is the standard for UML models, and the text file can be easily interpreted. Outputs: Pass/fail results (test results). Output interpretation: Easy; the pass/fail results are logged in a file, and for failed test cases the Results Evaluator also logs the test path number, message, and object states. External software: UML modeling tool that generates an XMI file (Together is quoted in the text).

Ammann1994. NFR: No. Inputs: Software specification. Input interpretation: Z format. Outputs: Test scripts. Output interpretation: TSL format. External software: No.

Andrews2005. NFR: No. Inputs: Test model (LWP in FSM format, AFSM, all constraints on input selection). Input interpretation: Yes. Outputs: Test sequences. Output interpretation: eValid scripts. External software: No.

Barbey1996. NFR: No. Inputs: Software specification. Input interpretation: CO-OPN/2 format. Outputs: Test results. Output interpretation: Not defined. External software: No.

Basanieri2000. NFR: No. Inputs: UML diagrams. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Use case test suite (set of test cases). Output interpretation: Not defined. External software: No.

Bellettini2005. NFR: No. Inputs: the Web application's UML model written in XMI, a Web server log file, and XML configuration files specifying the Web server address and information about metrics and coverage use. Input interpretation: Yes. Outputs: Test case specification described in XML, with the format specified by a DTD file. Output interpretation: Easy, because there is a DTD file to support the output interpretation. External software: WebUML.

Bernard2006. NFR: No. Inputs: UML models built with the Together modeling tool. Input interpretation: Yes. Outputs: Test results after test execution. Output interpretation: Yes. External software: Together.

Bertolino2003. NFR: No. Inputs: UML diagrams using the UML Components notation. Input interpretation: Easy, because this format uses MOF and it is interpreted by the Cow_Suite tool. Outputs: Test procedures in JUnit or WCT format. Output interpretation: Yes. External software: Not defined.

Bertolino2005. NFR: No. Inputs: UML Sequence and State Diagrams. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: a SEQrc that represents a coherent integration of the initial models. Output interpretation: Not defined. External software: No.

Beyer2003. NFR: No. Inputs: UML Sequence Diagram in an XMI file. Input interpretation: Yes. Outputs: Test cases in TTCN-3 format. Output interpretation: Yes. External software: UML modeling tool that generates an XMI file.

Botaschanjan2004. NFR: No. Inputs: UML models and OCL in a format not defined by the paper. Input interpretation: Not defined. Outputs: Expected result and/or OCL contract as test oracle. Output interpretation: Not defined. External software: Not defined.

Bousquet1999a. NFR: Security properties. Inputs: Environment description and an oracle. Input interpretation: LUSTRE format. Outputs: Test sequences. Output interpretation: LUSTRE format. External software: No.

Briand2002. NFR: No. Inputs: Behavioral model in TOTEM. Input interpretation: Yes, because it is done in the same tool that generates the test cases. Outputs: Test cases and test oracles as OCL expressions. Output interpretation: Easy; this notation (OCL) is included in UML and its schema is defined by the OMG. External software: No.

Briand2002a. NFR: No. Inputs: UML diagrams of two system versions (XMI files produced by UML CASE tools) along with the original regression test suite. Input interpretation: Yes; XMI is the standard for UML definition. Outputs: Information about the changes between the models and re-classification of test cases after the changes. Output interpretation: Yes, it is a report. External software: UML modeling tool that generates an XMI file.

Briand2002b. NFR: No. Inputs: UML Class Diagram. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Class order for integration testing. Output interpretation: Not defined, but it must be easy, since it is just one ordered list of classes. External software: Not defined.

Briand2004a. NFR: No. Inputs: information on (1) statecharts (including state invariants) and operation contracts, (2) the class diagram (operation signatures, attributes, associations), (3) the TTS, and (4) equivalent navigation paths (in the form of a user helper file) in the class diagram. Input interpretation: Easy; XMI file and OCL, there are rules to translate a statechart into DNF easily, and a metamodel is pre-defined. Outputs: The constraints for the different transitions in the TTS. Output interpretation: Not defined. External software: UML modeling tool that generates an XMI file.

Briand2004b. NFR: No. Inputs: UML Statechart Diagram in an unspecified format. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Paths (data flow) using EAFGs. Output interpretation: Not easy. External software: Not defined.

Briand2006. NFR: No. Inputs: Component metadata and test specification provided by the component vendor and component user. Input interpretation: Not easy; the inputs involve UML, OCL, and CSPE constraints. Outputs: Test sequences based on the DRPP solution. Output interpretation: Easy; paths (sequences) similar to the Traveling Salesman Problem. External software: No.

Carpenter1999. NFR: No. Inputs: The approach is abstract, so the inputs are UML models and test data arguments and types. Input interpretation: Easy to define. Outputs: Test cases and test scripts. Output interpretation: Easy to define. External software: Test-harness generation tool to convert test cases into test scripts.

Cavarra2003. NFR: No. Inputs: UML specification in an XMI file. Input interpretation: Yes; it is the standard for UML definition. Outputs: Test cases in Tree and Tabular Combined Notation (TTCN), a standard format in the telecommunication industry. Output interpretation: Easy; this output can be easily translated to produce test cases in the language of any API, whether C, C++, or Java. External software: UML modeling tool that generates an XMI file, such as Rose, Objecteering, Together, or Poseidon.

Chang1999. NFR: No. Inputs: ADL specification and Test Data Description (TDD). Input interpretation: ADL format. Outputs: Test drivers. Output interpretation: C format. External software: ADL.

Chen2002. NFR: No. Inputs: Activity diagrams for each feature and the test cases. Input interpretation: Not defined. Outputs: Subset of test cases selected from the full set for one software. Output interpretation: Yes, just one list of test cases. External software: No.

Chen2005. NFR: No. Inputs: SOFL specification. Input interpretation: SOFL language. Outputs: Test cases. Output interpretation: Not defined. External software: No.

Chevalley2001. NFR: No. Inputs: A set of state diagrams representing super-states of the (sub)system under study. Input interpretation: Yes, using UML State Diagrams. Outputs: two outputs: (1) outputs produced by simulating the model with the input values, and (2) a file representing a Java class that contains the input values translated into Java instructions. Output interpretation: Yes, a Java class. External software: Rose RealTime tool.

Crichton2001. NFR: No. Inputs: UML specification in an XMI file. Input interpretation: Yes; it is the standard for UML definition. Outputs: Test suites in Tree and Tabular Combined Notation (TTCN). Output interpretation: Yes; this output can be translated to produce test cases in the language of any API, whether C, C++, or Java. External software: UML modeling tool that generates an XMI file.

Dalal1999. NFR: No. Inputs: Test data modeling. Input interpretation: Not defined. Outputs: Test cases. Output interpretation: Not defined. External software: No.

Deng2004. NFR: No. Inputs: for system testing, UML models using ISSEE; for regression testing, the set of test cases provided by SSDM. Input interpretation: Not defined. Outputs: for system testing, new test cases; for regression testing, test cases related to the changed operation. Output interpretation: Not defined. External software: No.

Friedman2002. NFR: No. Inputs: a behavioral model of the system under test written in the Murφ description language for EFSMs, a set of coverage criteria, and test constraints written in the syntax given above. Input interpretation: Format described in the paper. Outputs: Test cases. Output interpretation: Not defined. External software: No.

Gargantini1999. NFR: No. Inputs: SCR specification. Input interpretation: Yes. Outputs: Test sequences. Output interpretation: XML files. External software: SMV or Spin model checker.

Garousi2006. NFR: Efficiency and reliability. Inputs: UML diagrams in an unspecified format. Input interpretation: Not defined. Outputs: Stress test requirements. Output interpretation: Not defined. External software: Not defined.

Gnesi2004. NFR: No. Inputs: Statechart Diagram. Input interpretation: Yes. Outputs: Test cases written in a formal language. Output interpretation: Not easy. External software: AutoFocus and TGV/AGEDIS.

Gross2005. NFR: No. Inputs: System UML models developed manually. Input interpretation: Hard; there is no tool to support importing the input. Outputs: Test cases in TTCN-3 format. Output interpretation: Yes; this notation is usually used to describe test cases. External software: No.

Hartman2004. NFR: No. Inputs: (a) the behavioral model of the system, (b) the test execution directives, which describe the testing architecture of the SUT, and (c) the test generation directives, which describe the strategies to be employed in testing the SUT. Input interpretation: (a) and (b) are easy because UML describes the models; (c) is easy because it uses a defined XML schema. Outputs: Abstract test suite consisting of test cases that cover the desired test generation directives. Output interpretation: Yes; the test generator is based on the TGV engine, with additional coverage and observation capabilities derived from the GOTCHA test generator, and this format is already defined. External software: UML modeling tool equipped with the AGEDIS UML profile (e.g., Objecteering UML Modeler).

Hartmann2000. NFR: No. Inputs: Modeling of individual components or collections of components using UML Statecharts, and a global behavioral model of the composed Statecharts. Input interpretation: Easy, using Rational Rose. Outputs: A set of conformance tests described in TSL; these test cases ensure compliance of the design specification with the resulting implementation. Output interpretation: Easy; TSL can be interpreted by many test execution tools. External software: Rational Rose.

Hong2000. NFR: No. Inputs: Statechart. Input interpretation: STATEMATE format. Outputs: Test sequences. Output interpretation: Not defined. External software: No.

Kansomkeat2003. NFR: No. Inputs: Sequence Diagram built with the Rational Rose tool. Input interpretation: Yes, but a license is required to use this tool. Outputs: Test cases in conjunctive form. Output interpretation: Yes, because it uses a known format. External software: Rational Rose.

Kim1999. NFR: No. Inputs: UML State Diagram. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Test cases. Output interpretation: Not defined. External software: Not defined.

Legeard2004. NFR: No. Inputs: B or Z model from Atelier B. Input interpretation: Medium. Outputs: Test scripts. Output interpretation: Not defined. External software: Atelier B.

Linzhang2004. NFR: No. Inputs: UML specification in an MDL plain-text file (generated by the Rational Rose tool). Input interpretation: Yes, because it is a standard notation. Outputs: Test cases from activity diagrams using the UMLTGF tool. Output interpretation: Yes, because the tool supports generating and viewing the test cases. External software: Rational Rose.

Liuying1999. NFR: No. Inputs: UML State Diagram. Input interpretation: UML specification generated by Rose98. Outputs: Test sequence. Output interpretation: Not defined. External software: No.

Lucio2005. NFR: No. Inputs: Fondue concept and behavior models in an XMI file, and observability hypotheses using COOPN specifications. Input interpretation: the Fondue models' XMI can be easily loaded; the observability hypotheses are not easy, since they are based on temporal logic formulas that express test intentions for COOPN specifications using a language developed by the research group responsible for this approach. Outputs: Test cases expressed by the temporal logic formulas that are the main basis of the test/constraint language under development by this research group. Output interpretation: Not easy. External software: Astra QuickTest (execution).

Lund2006. NFR: No. Inputs: Diagrams that may contain the operators neg and assert; the generated tests are themselves represented as sequence diagrams. Input interpretation: Not easy; it uses sequence diagrams and STAIRS semantics to describe the software behavior, and this language is not easily interpreted. Outputs: Test cases with the operators neg and assert. Output interpretation: Not easy; the test cases are generated using the same STAIRS semantics. External software: No.

Mandrioli1995. NFR: Time requirements. Inputs: Software specifications in text files. Input interpretation: Yes, text files. Outputs: Test results. Output interpretation: Not defined. External software: History checker.

Meyer1998. NFR: No. Inputs: Use case + operational profile. Input interpretation: Not easy; there are many complex steps to develop the operational profile. Outputs: Test script output stream in the WinRunner script format. Output interpretation: Yes, scripts in TSL format to be executed by WinRunner. External software: No.

Mingsong2006. NFR: No. Inputs: Activity Diagram and Java source code. Input interpretation: Easy. Outputs: Test execution results in the Java program. Output interpretation: Easy. External software: No.

Murthy2006. NFR: No. Inputs: Extended Statechart Diagram. Input interpretation: Easy (needs a license). Outputs: Test scripts for each statechart diagram. Output interpretation: Easy (Java code). External software: Rational Rose.

Nebut2006. NFR: No. Inputs: Use Case Diagram and the contract of each use case. Input interpretation: Not easy; contracts must be defined for each use case using logic operators to be interpreted during test case generation. Outputs: Test cases. Output interpretation: Easy; test cases in the form of Java classes, using the JUnit framework as a test harness. External software: JUnit.

Offut1999a. NFR: No. Inputs: Models in SOFL. Input interpretation: SOFL language. Outputs: Test cases. Output interpretation: TSL format. External software: Not defined.

Offutt1999b. NFR: Efficiency. Inputs: Rose specification file (MDL file). Input interpretation: Easy (needs a license). Outputs: Test cases in an ASCII text file. Output interpretation: Easy. External software: Rational Rose.

Offutt2003. NFR: No. Inputs: Transition conditions taken directly from SCR tables and from UML specifications. Input interpretation: Not easy; the step of translating test specification values into program inputs must be executed manually. Outputs: Test scripts. Output interpretation: The format is not defined and SPECTEST does not support this step. External software: SCRTool and Rational Rose.

Olimpiew2005. NFR: No. Inputs: PLUS models. Input interpretation: Yes; the paper describes how to interpret the model used by the approach. Outputs: Test case for a use case, generated randomly. Output interpretation: Yes; test cases are described in a table with specific fields (inputs, outputs, requirements, items, needs). External software: Not defined.

Paradkar2004a. NFR: No. Inputs: Operational model (SALT). Input interpretation: Yes. Outputs: Test cases. Output interpretation: Not defined. External software: No.

Parissis1996. NFR: Security properties. Inputs: Environment specification and safety properties. Input interpretation: LUSTRE format. Outputs: Test data to check the system (including the safety properties). Output interpretation: LUSTRE format. External software: No.

Pretschner2001. NFR: Time requirements. Inputs: AutoFocus model. Input interpretation: Generated by AutoFocus. Outputs: Test verdicts. Output interpretation: Generated by AutoFocus. External software: Not defined.

Richardson1992. NFR: Efficiency and security. Inputs: System specification. Input interpretation: Not defined. Outputs: Test oracles. Output interpretation: Not defined. External software: Not defined.

Richardson1996. NFR: No. Inputs: Chemical Abstract Machine. Input interpretation: Not defined. Outputs: Test data. Output interpretation: Not defined. External software: No.

Riebisch2002. NFR: No. Inputs: Use cases following a specific template. Input interpretation: Described in a table (in the paper); easy to interpret. Outputs: Failure data and usage model; metrics for test sufficiency and product quality. Output interpretation: The format is not described, but this information is easily interpreted. External software: No.

Rumpe2003. NFR: Efficiency. Inputs: Object and Sequence Diagrams and OCL, from an undefined format or tool. Input interpretation: These diagrams usually use MOF, and this format is easy to interpret. Outputs: Test cases modeled in UML. Output interpretation: Easy. External software: The approach suggests using tools to accomplish the model translations, but no specific tool is defined.

Satpathy2005. NFR: No. Inputs: B specification. Input interpretation: B language. Outputs: Test verdicts for Java programs. Output interpretation: Not defined. External software: No.

Scheetz1999. NFR: No. Inputs: UML models. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Test cases in an unspecified format. Output interpretation: Not defined. External software: UCPOP 4.0 (Planner).

Sinha2006a. NFR: No. Inputs: WSDL-S model. Input interpretation: Yes; this is an extension of the international standard for describing web services. Outputs: EFSM. Output interpretation: Yes. External software: No.

Sokenou2006. NFR: No. Inputs: UML Sequence and State Diagrams. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Test cases in an unspecified format. Output interpretation: Not defined. External software: Not defined.

Stobie2005a. NFR: No. Inputs: Specification in AsmL. Input interpretation: Yes. Outputs: Test cases in XML. Output interpretation: Yes. External software: No.

Stocks1996. NFR: No. Inputs: Input space. Input interpretation: Yes, Z notation. Outputs: Output space. Output interpretation: Yes, Z notation. External software: Not defined.

Tahat2001. NFR: No. Inputs: Software requirements expressed in textual and SDL formats. Outputs: Test cases. Output interpretation: Not defined. External software: Not defined.

Tan2004. NFR: Security and efficiency (time). Inputs: Software specification. Input interpretation: LTL format. Outputs: Test cases. Output interpretation: LTS format. External software: Model checkers SMV and SPIN.

Traore2003. NFR: No. Inputs: UML specification in an XMI file. Input interpretation: Yes. Outputs: Test case definitions and test results in PrUDE Toolkit format. Output interpretation: Not defined, but there is a tool to support the model creation. External software: UML modeling tool that generates an XMI file.

Vieira2006. NFR: No. Inputs: Activity Diagram. Input interpretation: Easy (Eclipse diagram). Outputs: Test scripts in TSL format. Output interpretation: Easy; used by several capture/replay tools. External software: Eclipse, adapted to the approach.

Wu2003. NFR: No. Inputs: UML diagrams describing context-dependence relationships and content-dependence relationships. Input interpretation: Not defined, but UML uses MOF to define its models, and this standard may be interpreted easily. Outputs: Not defined. Output interpretation: Not defined. External software: Not defined.

Xu2006a. NFR: No. Inputs: State modeling (classes and aspects). Input interpretation: Not defined. Outputs: Test results. Output interpretation: Not defined. External software: No.

Zhen2004. NFR: Efficiency. Inputs: Design model of the system in UML 2.0 format. Input interpretation: Easy; no specific format is defined, but UML 2.0 uses MOF to define the models, which can be easily interpreted. Outputs: Test cases. Output interpretation: Not defined. External software: No.
